c3tracker:setup

Last modified: 2025/01/31 20:59 by kunsi (previous revision: 2022/05/27 17:35 by andi)
  * Compare project properties with previous instalment of the same event
  
==== optimal properties from current project ====

Replace `camp2023` with your chosen tracker project slug.

  Meta.Acronym                    camp2023
  Meta.Album                      Chaos Communication Camp 2023
  Meta.License                    Licensed to the public under http://creativecommons.org/licenses/by/4.0
  Meta.Year                       2023
  
  Processing.Auphonic.Enable      no
  Processing.BasePath             /video
  Processing.MasterMe.Enable      yes
  Processing.Path.Intros          /video/intros/camp2023
  Processing.Path.Outro           /video/intros/camp2023/outro.ts
  
  Publishing.Upload.SkipSlaves    speedy,tweety,blade1,blade2,blade3,blade4
  Publishing.UploadTarget         releasing.c3voc.de:/video/encoded/camp2023/
  Publishing.Tags                 <additional tags>
  Publishing.Voctoweb.Enable      yes
  Publishing.Voctoweb.Path        /cdn.media.ccc.de/events/camp2023
  Publishing.Voctoweb.Slug        camp2023
  Publishing.Voctoweb.Thumbpath   /static.media.ccc.de/conferences/camp2023
  Publishing.YouTube.Category     27
  Publishing.YouTube.Enable       yes
  Publishing.YouTube.Playlists    <meep>
  Publishing.YouTube.Privacy      <one of: public, unlisted, private>
  Publishing.YouTube.Token        <meep>
  
  Record.Container                TS
  Record.EndPadding               300
  Record.Slides                   yes
  Record.StartPadding             300
  
=== Worker Filter Examples ===
  
  EncodingProfile.IsMaster=no
  EncodingProfile.IsMaster=yes
  EncodingProfile.IsMaster=
  Fahrplan.Room=Servus.at Lab

Please note that the conditions in the "project to worker group" filter are currently always evaluated with logical OR.

Specifying a property with an empty value, which is often done for `EncodingProfile.IsMaster`, will match if this property does not exist at all on a ticket. So for `EncodingProfile.IsMaster`, specifying an empty filter will match on recording tickets, which never have this property.
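To illustrate the two filter rules (OR over all conditions, and empty value matching a missing property), here is a small shell sketch. This is hypothetical illustration code, not part of CRS; the helper names and the ticket file are made up:

```shell
# Hypothetical sketch (NOT actual CRS code) of the worker-filter semantics:
# conditions are joined with logical OR, and a condition with an empty value
# matches tickets that lack the property entirely.

# matches_condition <ticket-properties-file> <property> <value>
matches_condition() {
    if [ -z "$3" ]; then
        # empty filter value: match if the property does not exist on the ticket
        ! grep -q "^$2=" "$1"
    else
        [ "$(sed -n "s/^$2=//p" "$1")" = "$3" ]
    fi
}

# matches_filter <ticket-properties-file> <Property=Value>... - OR over all conditions
matches_filter() {
    ticket=$1; shift
    for cond in "$@"; do
        matches_condition "$ticket" "${cond%%=*}" "${cond#*=}" && return 0
    done
    return 1
}

# Example: a recording ticket, which never carries EncodingProfile.IsMaster
cat > /tmp/ticket.properties <<'EOF'
Fahrplan.Room=Servus.at Lab
EOF

matches_condition /tmp/ticket.properties EncodingProfile.IsMaster "" \
    && echo "empty IsMaster filter matches (property absent)"
matches_filter /tmp/ticket.properties "EncodingProfile.IsMaster=" "Fahrplan.Room=Nowhere" \
    && echo "filter matches via OR (first condition)"
```
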
== Pipeline setup during event ==
  
During event setup of the pipeline, you have to decide if you want to leave the MPEG TS snippets only on the [[hardware:encoder|recording cubes]] or also rsync them to a central storage:
  
=== Simple: single-room setup (Variant 2) ===

{{drawio>c3tracker:setup-simple.png}}

This variant is only practical if you have only one room, or at least one release encoder (aka [[hardware:Minion]]) for each recording cube.
When using this variant with multiple rooms in one Tracker project (like at JEV22), you also have to set room filters in the tracker worker queues.

For every worker:
  * set `EncodingProfile.IsMaster = yes` to avoid encoding all sub formats
  * (set room filters in the tracker, e.g. `Fahrplan.Room = Foobar`, but this cannot be used at the same time as the above; see the warning below)

For every recording cube:
  * start the tracker worker: `sudo systemctl start crs-worker.target`

For each minion:
  * mount the filesystems from the encoder cube: `sudo crs-mount <storage location>`
  * start the tracker scripts for encoding: `sudo systemctl start crs-encoding.service`

<panel type="danger" title="Attention">Since tracker filters are joined via OR and not AND, this setup cannot be extended to multiple rooms without hacks if you want to e.g. limit on-site encoding to master formats. Use the `CRS_ROOM` filter in bundlewrap if you need both tracker filters and room-specific encoding workers.</panel>
  
=== centralised storage (rsync) (Variant 1) ===
  
{{drawio>c3tracker:setup-central-storage.png}}

The first variant is typically used for events with more than one room. For bigger events we use the dedicated [[hardware:event-storage|storage]] server in the event server rack; for smaller events a USB hard drive connected to one of the minions might be sufficient. Each recording cube exposes the files via rsyncd; the snippets are pulled by an rsync process running on the storage PC.

For each encoderX, start the rsync pull on the central storage: `sudo systemctl start rsync-from-encoder@encoderX.lan.c3voc.de`
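The `rsync-from-encoder@` unit wraps an rsync pull from one encoder. If the unit is not available, the same effect can be achieved manually with a loop like this one, taken from an older revision of this page (run inside a screen on the central storage, once per encoder; `<EVENT>` and `encoderX` are placeholders):

```shell
# Manual fallback from an older revision of this page: pull the MPEG TS
# snippets from one encoder every 60 seconds.
# <EVENT> is the event slug, encoderX the encoder hostname.
cd /video/capture/<EVENT>/
while true; do
    rsync -av --bwlimit=10000 --append --inplace -t \
        encoderX.lan.c3voc.de::video/capture/<EVENT>/ .
    sleep 60
done
```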
  
Then, start the tracker workers on the storage: `sudo systemctl start crs-worker.target` (only needed if you don't use `storage.lan.c3voc.de`; there the worker scripts are started automatically)
  
==== Minion setup ====
To allow the encoding workers to do their job, they first need to mount the storage: `sudo crs-mount <storage location>`

After mounting, you can start the tracker encoding workers: `sudo systemctl start crs-encoding.service`

The minion VMs running inside our event colo case automatically mount `storage.lan.c3voc.de` via cifs and start their worker scripts. You usually do not need to touch them.
      
==== Cube as worker setup ====
  
  
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/fuse /video/fuse
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/video/intros /video/intros
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/tmp /video/tmp
  sudo mount -t cifs -o uid=voc,password= //storage.lan.c3voc.de/encoded /video/encoded
      
=== decentralised pipeline aka "even more samba" (Variant 3) ===

<panel type="danger" title="Attention">The decentralised pipeline (Variant 3) should not be used by inexperienced users. Use the information above to find out how to get this variant working, then adjust/improve the documentation here.</panel>

Similar to variant 2, but extended to work with multiple rooms. Instead of using rsync, the recorded snippets remain on the encoding cubes and ''/video/fuse/$event/$room'' is exposed via samba to the minions, while the encoded and tmp files live on one "central" minion; all other minions mount ''/video/encoded'' and ''/video/tmp'' from this primary minion. [Reasoning: the tracker cannot guarantee that the machine which encoded a talk also does the postprocessing (upload) step, so all minions have to see the same files.]

Tracker filters only have to be set for the recording cubes; minions do not require any filters. (On smaller events without many minions, an ''EncodingProfile.IsMaster=yes'' filter can still be a good idea, so sub formats won't crowd out the queues; they can always be encoded off-site later.)
  
On the recording cubes, start the following systemd units:
  * ''crs-recording-scheduler''
  * ''crs-mount4cut''
  * ''crs-cut-postprocessor''
  
On all minions, including the one acting as storage, do:

  mkdir -p /video/fuse/$event/{$room1,$room2,..}
  mount.cifs -o uid=voc,password= {//$encoder1.lan.c3voc.de,}/video/fuse/$event/$room1
  mount.cifs -o uid=voc,password= {//$encoder2.lan.c3voc.de,}/video/fuse/$event/$room2
  ...
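The ''{//$host,}/path'' notation used in these mount commands is bash brace expansion: with an empty second alternative, ''{PREFIX,}/path'' expands to ''PREFIX/path /path'', i.e. the cifs source followed by the identical local mount point. A quick demonstration (the hostname is just an example):

```shell
# {PREFIX,}/path is bash brace expansion; with an empty second alternative it
# expands to "PREFIX/path /path" - the cifs source plus the local mount point.
bash -c 'echo {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam'
# prints: //encoder1.lan.c3voc.de/video/fuse/jev22/Ahlam /video/fuse/jev22/Ahlam
```

Note that this only works in bash/zsh, not plain POSIX sh, and only without spaces inside the braces.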
  
On all minions except the one acting as storage, also mount:

  mount.cifs -o uid=voc,password= //$storage.lan.c3voc.de/encoded /video/encoded
  mount.cifs -o uid=voc,password= //$storage.lan.c3voc.de/tmp /video/tmp
  mount.cifs -o uid=voc,password= {//$storage.lan.c3voc.de,}/video/intros
  
Finally, on all minions, including the one acting as storage, start the following systemd units:
  * ''crs-encoding''
  * ''crs-postencoding''
  * ''crs-postprocessing''
  
==== Old example with systemd units and case 1 and 5, which was used during jev22 in Munich ====

{{drawio>c3tracker:setup-variant-3.png}}
  
optional: configure `10.73.0.2` (aka `storage.lan.c3voc.de`) on the master minion as a secondary IP
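Adding `10.73.0.2` as a secondary IP on the master minion could look like this (hypothetical example; the interface name `eth0` and the prefix length are assumptions, adjust them to the actual event network):

```shell
# Add 10.73.0.2 as a secondary address on the master minion.
# Assumptions: primary NIC is eth0 and the event LAN is a /16 -
# check with `ip a` and adjust accordingly.
sudo ip addr add 10.73.0.2/16 dev eth0
ip -o addr show dev eth0   # verify that both addresses are now listed
```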
  
on recording cubes, mount or copy the intros from their source – here `storage.lan.c3voc.de`:
  
  sudo mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  sudo systemctl start crs-recording-scheduler  # A
  sudo systemctl start crs-mount4cut            # B
  sudo systemctl start crs-cut-postprocessor    # C
  
  # check if everything is running as expected – you might have to disable/stop the other CRS workers D-F
  sudo systemctl status -n 0 crs-*
  
on master minion (in this example `storage.lan.c3voc.de`)

  mkdir -p /video/fuse/jev22/{Ahlam,Bhavani}
  mount -t cifs -o password= {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam
  mount -t cifs -o password= {//encoder5.lan.c3voc.de,}/video/fuse/jev22/Bhavani
  
  sudo systemctl start crs-encoding             # D-encoding
  sudo systemctl start crs-postencoding         # E-postencoding-auphonic
  sudo systemctl start crs-postprocessing       # F-postprocessing-upload
  
  # check if everything is running as expected – you might have to disable/stop the other CRS workers A-C
  sudo systemctl status -n 0 crs-*

//(ensure that samba is installed on this master minion aka storage)//

on other minions

  mkdir -p /video/fuse/jev22/{Ahlam,Bhavani}
  mount -t cifs -o uid=voc,password= {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam
  mount -t cifs -o uid=voc,password= {//encoder5.lan.c3voc.de,}/video/fuse/jev22/Bhavani
  mount -t cifs -o password= //storage.lan.c3voc.de/encoded /video/encoded
  mount -t cifs -o password= //storage.lan.c3voc.de/tmp /video/tmp
  mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  
==== Old example with custom screenrc and case 5 and 6 ====
  
on recording cubes without intros, either copy or mount the intros from their source
  
  sudo mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  
on master minion (in this example minion5)
  mount -t cifs -o password= //encoder5.lan.c3voc.de/video/fuse/podstock2019/Aussenbuehne /video/fuse/podstock2019/Aussenbuehne
  mount -t cifs -o password= //encoder6.lan.c3voc.de/video/fuse/podstock2019/Innenbuehne /video/fuse/podstock2019/Innenbuehne
  mount -t cifs -o password= //encoder6.lan.c3voc.de/video/intros /video/intros
  cd /opt/crs/tools/tracker3.0/
  sudo ./start screenrc-pipeline # with steps D, E, F
//(ensure that samba is installed on this master minion)//
  
  
on other minions
  mount -t cifs -o password= {//encoder5.lan.c3voc.de,}/video/fuse/podstock2019/Aussenbuehne
  mount -t cifs -o password= {//encoder6.lan.c3voc.de,}/video/fuse/podstock2019/Innenbuehne
  mount -t cifs -o password= //storage.lan.c3voc.de/encoded /video/encoded
  mount -t cifs -o password= //storage.lan.c3voc.de/tmp /video/tmp
  mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  cd /opt/crs/tools/tracker3.0/
  sudo ./start screenrc-encoding-only # only step E