====== c3tracker:setup ======

  * Compare project properties with previous instalment of the same event
  
==== Optimal properties from current project ====

Replace `camp2023` with your chosen tracker project slug.

  Meta.Acronym                    camp2023
  Meta.Album                      Chaos Communication Camp 2023
  Meta.License                    Licensed to the public under http://creativecommons.org/licenses/by/4.0
  Meta.Year                       2023
  
  Processing.Auphonic.Enable      no
  Processing.BasePath             /video
  Processing.MasterMe.Enable      yes
  Processing.Path.Intros          /video/intros/camp2023
  Processing.Path.Outro           /video/intros/camp2023/outro.ts
  
  Publishing.Upload.SkipSlaves    speedy,tweety,blade1,blade2,blade3,blade4
  Publishing.UploadTarget         releasing.c3voc.de:/video/encoded/camp2023/
  Publishing.Tags                 <additional tags>
  Publishing.Voctoweb.Enable      yes
  Publishing.Voctoweb.Path        /cdn.media.ccc.de/events/camp2023
  Publishing.Voctoweb.Slug        camp2023
  Publishing.Voctoweb.Thumbpath   /static.media.ccc.de/conferences/camp2023
  Publishing.YouTube.Category     27
  Publishing.YouTube.Enable       yes
  Publishing.YouTube.Playlists    <meep>
  Publishing.YouTube.Privacy      <one of: public, unlisted, private>
  Publishing.YouTube.Token        <meep>
  
  Record.Container                TS
  Record.EndPadding               300
  Record.Slides                   yes
  Record.StartPadding             300
  
=== Worker Filter Examples ===
  
  
  EncodingProfile.IsMaster=no
  EncodingProfile.IsMaster=yes
  EncodingProfile.IsMaster=
  Fahrplan.Room=Servus.at Lab

Please note that the conditions in the "project to worker group" filter are currently always evaluated with logical OR.
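For example, assigning both of the following conditions to the same worker group (a combination shown only to illustrate the OR semantics):

  EncodingProfile.IsMaster=yes
  Fahrplan.Room=Servus.at Lab

will hand that group every master-encoding ticket of the project //plus// every ticket from "Servus.at Lab", not only the master tickets from that room.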
  
Specifying a property with an empty value, which is often done for `EncodingProfile.IsMaster`, will match if the property does not exist at all on a ticket. So for `EncodingProfile.IsMaster`, an empty filter also matches recording tickets, which never have this property.
== Pipeline setup during event ==
  
During event setup of the pipeline, you have to decide whether you want to leave the MPEG TS snippets only on the [[hardware:encoder|recording cubes]] or also rsync them to a central storage:
  
=== Simple: single-room setup (Variant 2) ===

{{drawio>c3tracker:setup-simple.png}}

This variant is only practical if you have only one room, or at least one release encoder (aka [[hardware:Minion]]) for each recording cube.
When using this variant with multiple rooms in one Tracker project (like at JEV22), you also have to set room filters in the tracker worker queues.

For every worker:
  * set `EncodingProfile.IsMaster = yes` to avoid encoding all sub formats
  * (set room filters in the tracker, e.g. `Fahrplan.Room = Foobar`, but this cannot be used at the same time as the above, see the warning below)

For every recording cube:
  * start the tracker worker: `sudo systemctl start crs-worker.target`

For each minion:
  * mount the filesystems from the recording cube: `sudo crs-mount <storage location>`
  * start the tracker scripts for encoding: `sudo systemctl start crs-encoding.service`

<panel type="danger" title="Attention">Since tracker filters are joined via OR and not AND, this setup cannot be extended to multiple rooms without hacks if you want to e.g. limit on-site encoding to master formats. Use the `CRS_ROOM` filter in bundlewrap if you need both tracker filters and room-specific encoding workers.</panel>
  
=== centralised storage (rsync) (Variant 1) ===

{{drawio>c3tracker:setup-central-storage.png}}
  
The first variant is typically used for events with more than one room. For bigger events we use the dedicated [[hardware:event-storage|storage]] server in the event server rack; for smaller events a USB hard drive connected to one of the minions might be sufficient. Each recording cube exposes the files via rsyncd, and they are pulled by an rsync process running inside a screen on the storage PC.
For each encoderX start rsync on the central storage: `sudo systemctl start rsync-from-encoder@encoderX.lan.c3voc.de`
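With several recording cubes this can be done in a small shell loop (a sketch; the encoder host names are only examples):

  for enc in encoder1 encoder2 encoder5; do
    sudo systemctl start "rsync-from-encoder@${enc}.lan.c3voc.de"
  done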
  
Then, start tracker workers on storage: `sudo systemctl start crs-worker.target` (only needed if you don't use `storage.lan.c3voc.de`; there, the worker scripts are started automatically)
  
==== Minion setup ====
  
After mounting, you can start the tracker encoding workers: `sudo systemctl start crs-encoding.service`

The minion VMs running inside our event colo case automatically mount `storage.lan.c3voc.de` via cifs and start their worker scripts. You usually do not need to touch them.
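For a minion outside the colo case, a manual setup could look roughly like this (a sketch only; it assumes the `storage.lan.c3voc.de` shares shown in the //Cube as worker setup// section below, and the exact set of mounts depends on the event):

  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/fuse /video/fuse
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/tmp /video/tmp
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/encoded /video/encoded
  sudo systemctl start crs-encoding.service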
      
==== Cube as worker setup ====
The rest is very similar to above, but with different mounts so `/video/capture` is not hidden:
  
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/fuse /video/fuse
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/video/intros /video/intros
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/tmp /video/tmp
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/encoded /video/encoded
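Before starting any workers it can be useful to verify that all four shares are mounted, for example:

  findmnt -t cifs
  ls /video/fuse /video/intros /video/tmp /video/encoded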
      
=== decentralised pipeline aka "even more samba" (Variant 3) ===

<panel type="danger" title="Attention">The "decentralised pipeline (Variant 3)" should not be used by inexperienced users. Use the information above to find out how to get this variant working, then adjust/improve the documentation here.</panel>

Similar to variant 2, but extended to work with multiple rooms. Instead of using rsync, the recorded snippets remain on the recording cubes and ''/video/fuse/$event/$room'' is exposed via samba to the minions, while the encoded and tmp files live on one "central" minion; all other minions mount ''/video/encoded'' and ''/video/tmp'' from the primary minion (reasoning: the tracker cannot guarantee that the machine which encoded a talk also does the postprocessing (upload) step, so all minions have to see the same files).

Tracker filters have to be set only for the recording cubes; minions do not require any filters (but on smaller events without many minions, an ''EncodingProfile.IsMaster=yes'' filter can be a good idea, so sub formats won't crowd out the queues; they can always be encoded off-site later).

On recording cubes, start the following systemd units:
  * ''crs-recording-scheduler''
  * ''crs-mount4cut''
  * ''crs-cut-postprocessor''
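These can also be started with a single command:

  sudo systemctl start crs-recording-scheduler crs-mount4cut crs-cut-postprocessor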
  
On all minions, including the one acting as storage, do:

  mkdir -p /video/fuse/$event/{$room1,$room2,...}
  mount.cifs -o uid=voc,password= {//$encoder1.lan.c3voc.de,}/video/fuse/$event/$room1
  mount.cifs -o uid=voc,password= {//$encoder2.lan.c3voc.de,}/video/fuse/$event/$room2
  ...
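The ''{//host,}/path'' form relies on bash brace expansion to produce both the remote share and the local mount point from one path. Written out, the first mount above is equivalent to:

  mount.cifs -o uid=voc,password= //$encoder1.lan.c3voc.de/video/fuse/$event/$room1 /video/fuse/$event/$room1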
  
On all minions except the one acting as storage, also mount:

  mount.cifs -o uid=voc,password= //$storage.lan.c3voc.de/encoded /video/encoded
  mount.cifs -o uid=voc,password= //$storage.lan.c3voc.de/tmp /video/tmp
  mount.cifs -o uid=voc,password= {//$storage.lan.c3voc.de,}/video/intros

Finally, on all minions, including the one acting as storage, start the following systemd units:
  * ''crs-encoding''
  * ''crs-postencoding''
  * ''crs-postprocessing''
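As above, the three units can be started in one command:

  sudo systemctl start crs-encoding crs-postencoding crs-postprocessing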
  
==== Old example with systemd units and cases 1 and 5, which was used during jev22 in Munich ====
  
{{drawio>c3tracker:setup-variant-3.png}}
  
Optional: configure `10.73.0.2` (aka `storage.lan.c3voc.de`) on the master minion as a secondary IP.
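A sketch of adding that secondary IP (the interface name and prefix length are examples and depend on the machine; the address does not persist across reboots):

  sudo ip addr add 10.73.0.2/24 dev eth0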
  
on recording cubes, mount or copy the intros from their source – here `storage.lan.c3voc.de`:

  sudo mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  sudo systemctl start crs-recording-scheduler  # A
  sudo systemctl start crs-mount4cut            # B
  sudo systemctl start crs-cut-postprocessor    # C
  
  # check if everything is running as expected – you might have to disable/stop the other CRS workers D-F
  sudo systemctl status -n 0 crs-*

on master minion (in this example `storage.lan.c3voc.de`):

  mkdir -p /video/fuse/jev22/{Ahlam,Bhavani}
  mount -t cifs -o password= {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam
  mount -t cifs -o password= {//encoder5.lan.c3voc.de,}/video/fuse/jev22/Bhavani
  
  sudo systemctl start crs-encoding             # D-encoding
  sudo systemctl start crs-postencoding         # E-postencoding-auphonic
  sudo systemctl start crs-postprocessing       # F-postprocessing-upload
  
  # check if everything is running as expected – you might have to disable/stop the other CRS workers A-C
  sudo systemctl status -n 0 crs-*

//(ensure that samba is installed on this master minion aka storage)//

on other minions:

  mkdir -p /video/fuse/jev22/{Ahlam,Bhavani}
  mount -t cifs -o uid=voc,password= {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam
  mount -t cifs -o uid=voc,password= {//encoder5.lan.c3voc.de,}/video/fuse/jev22/Bhavani
  mount -t cifs -o password= //storage.lan.c3voc.de/encoded /video/encoded
  mount -t cifs -o password= //storage.lan.c3voc.de/tmp /video/tmp
  mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
==== Old example with custom screenrc and cases 5 and 6 ====

on recording cubes without intros, either copy or mount the intros from their source:

  sudo mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  
on master minion (in this example minion5):
  
  mount -t cifs -o password= //encoder5.lan.c3voc.de/video/fuse/podstock2019/Aussenbuehne /video/fuse/podstock2019/Aussenbuehne
  mount -t cifs -o password= //encoder6.lan.c3voc.de/video/fuse/podstock2019/Innenbuehne /video/fuse/podstock2019/Innenbuehne
  mount -t cifs -o password= //encoder6.lan.c3voc.de/video/intros /video/intros
  cd /opt/crs/tools/tracker3.0/
  sudo ./start screenrc-pipeline # with steps D, E, F

//(ensure that samba is installed on this master minion)//
  
  
on other minions:
  
  mount -t cifs -o password= {//encoder5.lan.c3voc.de,}/video/fuse/podstock2019/Aussenbuehne
  mount -t cifs -o password= {//encoder6.lan.c3voc.de,}/video/fuse/podstock2019/Innenbuehne
  mount -t cifs -o password= //storage.lan.c3voc.de/encoded /video/encoded
  mount -t cifs -o password= //storage.lan.c3voc.de/tmp /video/tmp
  mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  cd /opt/crs/tools/tracker3.0/
  sudo ./start screenrc-encoding-only # only step E