c3tracker:setup

  * Compare project properties with previous instalment of the same event

==== optimal properties from current project ====

```
Meta.Acronym                    camp2023
Meta.Album                      Chaos Communication Camp 2023
Meta.License                    Licensed to the public under http://creativecommons.org/licenses/by/4.0
Meta.Year                       2023

Processing.Auphonic.Enable      no
Processing.BasePath             /video
Processing.MasterMe.Enable      yes
Processing.Path.Intros          /video/intros/camp2023
Processing.Path.Outro           /video/intros/camp2023/outro.ts

Publishing.Path                 /video/encoded/camp2023/
Publishing.Upload.SkipSlaves    speedy,tweety,blade1,blade2,blade3,blade4
Publishing.UploadTarget         upload@releasing.c3voc.de:/video/encoded/camp2023/
Publishing.Voctoweb.Enable      yes
Publishing.Voctoweb.Path        /cdn.media.ccc.de/events/camp2023
Publishing.Voctoweb.Slug        camp2023
Publishing.Voctoweb.Tags        <additional tags>
Publishing.Voctoweb.Thumbpath   /static.media.ccc.de/conferences/camp2023
Publishing.YouTube.Category     27
Publishing.YouTube.Enable       yes
Publishing.YouTube.Playlists    <meep>
Publishing.YouTube.Privacy      <one of: public, unlisted, private>
Publishing.YouTube.Tags         <additional tags>
Publishing.YouTube.Token        <meep>

Record.Container                TS
Record.EndPadding               300
Record.Slides                   yes
Record.StartPadding             300
```
  
```
  
Please note that the conditions in the "project to worker group" filter are currently evaluated with logical OR.
  
Specifying a property with an empty value, which is often done for EncodingProfile.IsMaster, will also match if this property does not exist at all on a ticket. So for EncodingProfile.IsMaster, an empty filter will match recording tickets, which never have this property.
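To illustrate the empty-value behaviour, here is a sketch of a "project to worker group" filter. The property names are taken from this page; the exact filter syntax may differ in your tracker version:

```
EncodingProfile.IsMaster = 
Fahrplan.Room = Foobar
```

The first line matches tickets whose EncodingProfile.IsMaster is empty or missing entirely, and since conditions are ORed, a ticket in room Foobar also matches even if it is a master encoding.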
== Pipeline setup during event ==
  
During event setup of the pipeline, you have to decide if you want to leave the MPEG TS snippets only on the [[hardware:encoder|recording cubes]] or also rsync them to a central storage:

=== Simple: decentralised classic (Variant 2) ===

{{drawio>c3tracker:setup-simple.png}}

This variant is only practical if you have only one room, or at least one release encoder (aka [[hardware:Minion]]) for each recording cube.
When using this variant with multiple rooms in one tracker project (like at JEV22), you also have to set room filters in the tracker worker queues.

For every worker:
  * set room filters in the tracker, e.g. `Fahrplan.Room = Foobar`

For every recording cube:
  * start the tracker worker: `sudo systemctl start crs-worker.target`

For each minion:
  * mount the filesystems from the recording cube: `sudo crs-mount <storage location>`
  * start the tracker scripts for encoding: `sudo systemctl start crs-encoding.service`
  
  
=== centralised storage (rsync) (Variant 1) ===

{{drawio>c3tracker:setup-central-storage.png}}

The first variant is typically used for events with more than one room. For bigger events we use the dedicated [[hardware:event-storage|storage]] server in the event server rack; for smaller events a USB hard drive connected to one of the minions might be sufficient. Each recording cube exposes the files via rsyncd; they are pulled by an rsync process running inside a screen on the storage PC.
For each encoderX, start rsync on the central storage: `sudo systemctl start rsync-from-encoder@encoderX.lan.c3voc.de`
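With several cubes this gets repetitive, so the per-encoder units can be generated in a loop. This is only a sketch: the hostnames `encoder1`..`encoder3` are placeholders, and the loop prints the commands instead of running them.

```
# Print the start command for each (hypothetical) recording cube;
# drop the "echo" to actually start the units.
for cube in encoder1 encoder2 encoder3; do
  echo sudo systemctl start "rsync-from-encoder@${cube}.lan.c3voc.de"
done
```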
  
Then, start the tracker workers on the storage: `sudo systemctl start crs-worker.target` (only needed if you don't use `storage.lan.c3voc.de`; there the worker scripts are started automatically)
  
==== Minion setup ====
  
After mounting, you can start the tracker encoding workers: `sudo systemctl start crs-encoding.service`

The minion VMs running inside our event colo case automatically mount `storage.lan.c3voc.de` via cifs and start their worker scripts. You usually do not need to touch them.
      
==== Cube as worker setup ====
```
      
  
=== decentralised pipeline (Variant 3) ===

<panel type="danger" title="Attention">The "decentralised pipeline (Variant 3)" should not be used by inexperienced users. Use the information above to find out how to get this variant working, then adjust/improve the documentation here.</panel>

Similar to variant 2, but the release encoder (minion) only mounts /video/fuse/$room/ from each recording cube. The encoded and tmp files live on one minion; the secondary minions mount /video/encoded and /video/tmp from the primary minion. (Reason: it is not guaranteed that the minion which encoded a talk also does the postprocessing (upload) step.)

You have to set the room filters only for the recording cubes; the minions can process talks independently.

  * On recording cubes:   start the systemd units for steps A ''crs-recording-scheduler'', B ''crs-mount4cut'', and C ''crs-cut-postprocessor''
  * On release encoders:  start the systemd units for steps D ''crs-encoding'', E ''crs-postencoding'', and F ''crs-postprocessing''


==== New example with systemd units and cases 1 and 5 ====
  
{{drawio>c3tracker:setup-variant-3.png}}

Optional: configure `10.73.0.2` (aka `storage.lan.c3voc.de`) as a secondary IP on the master minion.

On recording cubes, mount or copy the intros from their source – here `storage.lan.c3voc.de`:

  sudo mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
  sudo systemctl start crs-recording-scheduler  # A
  sudo systemctl start crs-mount4cut            # B
  sudo systemctl start crs-cut-postprocessor    # C
  
  # check if everything is running as expected – you might have to disable/stop the other CRS workers D-F
  sudo systemctl status -n 0 crs-*
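The `{//storage.lan.c3voc.de,}/video/intros` argument above is bash brace expansion: it produces both the cifs share and the local mount point from one string. You can preview what such a command will receive by prefixing it with `echo`:

```
# Brace expansion: {A,}/path expands to "A/path /path",
# i.e. the share followed by the mount point.
echo {//storage.lan.c3voc.de,}/video/intros
# → //storage.lan.c3voc.de/video/intros /video/intros
```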

On the master minion (in this example `storage.lan.c3voc.de`):
```
mkdir -p /video/fuse/jev22/{Ahlam,Bhavani}
mount -t cifs -o password= {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam
mount -t cifs -o password= {//encoder5.lan.c3voc.de,}/video/fuse/jev22/Bhavani

sudo systemctl start crs-encoding             # D-encoding
sudo systemctl start crs-postencoding         # E-postencoding-auphonic
sudo systemctl start crs-postprocessing       # F-postprocessing-upload

# check if everything is running as expected – you might have to disable/stop the other CRS workers A-C
sudo systemctl status -n 0 crs-*
```
//(ensure that samba is installed on this master minion aka storage)//

On the other minions:
```
mkdir -p /video/fuse/jev22/{Ahlam,Bhavani}
mount -t cifs -o uid=voc,password= {//encoder1.lan.c3voc.de,}/video/fuse/jev22/Ahlam
mount -t cifs -o uid=voc,password= {//encoder5.lan.c3voc.de,}/video/fuse/jev22/Bhavani
mount -t cifs -o password= //storage.lan.c3voc.de/encoded /video/encoded
mount -t cifs -o password= //storage.lan.c3voc.de/tmp /video/tmp
mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
```
  
==== Old example with custom screenrc and cases 5 and 6 ====
  
On recording cubes without intros, either copy or mount the intros from their source:
  
  sudo mount -t cifs -o password= {//storage.lan.c3voc.de,}/video/intros
cd /opt/crs/tools/tracker3.0/
sudo ./start screenrc-encoding-only # only step E
```
  
 • c3tracker/setup.txt
 • Last modified: 2023/09/02 09:08
 • by kunsi