At the lowest level, FFmpeg is used server-side through the fluent-ffmpeg library to render a Mash into files of various types. To run FFmpeg commands Movie Masher wraps an instance from this library with an instance of the Command class, that itself is wrapped by a RunningCommand instance which allows the FFmpeg process to be monitored and potentially stopped.
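The wrapping idea can be sketched minimally as follows. `CommandLike` is a hypothetical subset of the fluent-ffmpeg command surface (its real `progress`, `error`, and `end` events and `kill()` method are used here); the real Command and RunningCommand classes carry considerably more state:

```typescript
// Hypothetical subset of fluent-ffmpeg's command surface; the real library
// exposes many more methods and events.
interface CommandLike {
  on(event: string, listener: (arg?: unknown) => void): void;
  run(): void;
  kill(signal: string): void;
}

// Allows the underlying FFmpeg process to be monitored and potentially stopped.
class RunningCommand {
  constructor(readonly id: string, private readonly command: CommandLike) {}

  runPromise(onProgress?: (percent: number) => void): Promise<void> {
    return new Promise((resolve, reject) => {
      this.command.on("progress", (info) => {
        const percent = (info as { percent?: number })?.percent ?? 0;
        onProgress?.(percent);
      });
      this.command.on("error", (error) => reject(error));
      this.command.on("end", () => resolve());
      this.command.run(); // start the FFmpeg process
    });
  }

  stop(): void {
    this.command.kill("SIGKILL"); // terminate the FFmpeg child process
  }
}
```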
At the highest level, the RenderingServer receives a request to render a MashObject into one or more output files, represented by CommandOutputs. It validates the request and passes it to a new RenderingProcess instance, which creates a specific type of RenderingOutput instance for each CommandOutput provided. Each RenderingOutput is responsible for converting the Mash into a corresponding RenderingDescription object. The RenderingProcess then converts each of these into a CommandDescription, from which it creates a RunningCommand.
Hence, the RenderingProcess interface greatly simplifies generating multiple output files from a MashObject, DefinitionObjects, and CommandOutputs. The underlying RunningCommand interface can be used directly to render output, though any inputs specified must be locally sourced. The base-level Command interface can be used when monitoring isn't needed, or when synchronous results are required.
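The per-output flow above might be sketched like this; every shape here is a hypothetical simplification, not the library's actual declarations:

```typescript
// Heavily simplified, assumed shapes; the real interfaces carry many more
// fields (codecs, dimensions, definitions, etc.).
interface CommandOutput { outputType: "audio" | "video" | "image" }
interface RenderingDescription { commandOutput: CommandOutput }
interface CommandDescription { args: string[] }

// Each CommandOutput gets its own RenderingOutput, which knows how to
// describe the Mash for that particular output type...
const renderingOutputFor = (commandOutput: CommandOutput) => ({
  renderingDescription: (): RenderingDescription => ({ commandOutput }),
});

// ...and the RenderingProcess turns each description into a
// CommandDescription it could hand to a RunningCommand.
const commandDescription = (d: RenderingDescription): CommandDescription => ({
  args: ["-f", d.commandOutput.outputType],
});

const outputs: CommandOutput[] = [{ outputType: "video" }, { outputType: "audio" }];
const descriptions = outputs
  .map(renderingOutputFor)
  .map((o) => o.renderingDescription())
  .map(commandDescription);
```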
The following RenderingOutput types are supported:

- VideoOutput: a single video file, with audio tracks
- AudioOutput: a single audio file
- ImageOutput: a single image file
- ImageSequenceOutput: multiple image files, one for each frame
- WaveformOutput: a single image file, representing audio visually
Each RenderingOutput will return a different RenderingDescription for the same Mash content. For instance, one returned by an AudioOutput will only describe a range of its audible content while one from an ImageOutput will only describe its visible content at a single point in time. A VideoOutput will return one describing a range of both, with the visible content potentially broken up into smaller chunks. To accommodate all these cases, the RenderingDescription can contain:
- a single CommandDescription object, describing the audible content
- multiple CommandDescription objects, describing the visual content
- a single RenderingCommandOutput object that specifies output options
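As a rough TypeScript sketch of that structure (the field names follow the pattern described but are assumptions, not the library's actual declarations):

```typescript
// Hypothetical shapes inferred from the list above.
interface CommandDescription { inputs?: unknown[]; graphFilters?: unknown[] }
interface RenderingCommandOutput { format?: string; width?: number; height?: number }

interface RenderingDescription {
  audibleCommandDescription?: CommandDescription;    // single: audible content
  visibleCommandDescriptions?: CommandDescription[]; // multiple: visible chunks
  commandOutput: RenderingCommandOutput;             // output options
}

// e.g. what a VideoOutput might return: audio plus two visible chunks
const description: RenderingDescription = {
  audibleCommandDescription: {},
  visibleCommandDescriptions: [{}, {}],
  commandOutput: { format: "mp4", width: 1920, height: 1080 },
};
```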
The CommandDescription object structure closely matches FFmpeg's command line options and as such can contain:
- multiple CommandInputs, each describing a raw media source file plus related input options like start time or duration
- a single CommandOutput object describing the output file's AV codecs, bitrates, and dimensions, as well as its file extension and format, plus related output options
- multiple GraphFilters, each describing an FFmpeg filter to apply (AKA a filtergraph)
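A corresponding sketch of CommandDescription, again with assumed field names:

```typescript
// Hypothetical shapes mirroring FFmpeg's inputs / filtergraph / output options.
interface CommandInput { source: string; options?: Record<string, string | number> }
interface GraphFilter { filter: string; options: Record<string, string | number> }
interface CommandOutput {
  format?: string; extension?: string;
  videoCodec?: string; audioCodec?: string;
  videoBitrate?: number; audioBitrate?: number;
  width?: number; height?: number;
}

interface CommandDescription {
  inputs?: CommandInput[];       // raw media source files plus input options
  graphFilters?: GraphFilter[];  // the filtergraph to apply
  commandOutput?: CommandOutput; // codecs, bitrates, dimensions, format
}
```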
The RenderingOutput builds these CommandDescriptions from FilterGraph instances retrieved from its Mash:

- a single FilterGraph instance, describing audible content
- multiple FilterGraph instances, describing visible content
Each FilterGraph describes a section of the Mash that can be conveniently cached and rendered together, including just the Clips in that section relevant to the RenderingOutput. Together they are used to build the remaining RenderingDescription data:
- multiple CommandInput objects, describing any input files
- multiple GraphFilter objects, describing any filters to apply
In keeping with FFmpeg, the RenderingOutput will supply input files and/or filters. For instance, the default Clip is a simple colored rectangle which is adequately described by just the color filter. A Clip with a ShapeContainer is adequately described by just an SVG input file. A Clip with an Image or Video will typically be described both by an input file and multiple filters that size, position and crop it.
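For illustration, the three cases above might produce data along these lines. The filter names `color`, `scale`, and `crop` (and their option names) are real FFmpeg filters; the surrounding object shapes are assumptions:

```typescript
// Default Clip: no inputs, just FFmpeg's color source filter.
const colorClip = {
  inputs: [] as { source: string }[],
  graphFilters: [
    { filter: "color", options: { color: "#FF0000", size: "640x480", rate: 30 } },
  ],
};

// Clip with a ShapeContainer: just an SVG input file.
const shapeClip = {
  inputs: [{ source: "/cache/shape.svg" }],
  graphFilters: [] as { filter: string }[],
};

// Clip with an Image: an input file plus filters that size and crop it.
const imageClip = {
  inputs: [{ source: "/cache/image.jpg" }],
  graphFilters: [
    { filter: "scale", options: { w: 640, h: 480 } },
    { filter: "crop", options: { w: 320, h: 240, x: 0, y: 0 } },
  ],
};
```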
The CommandInput interface extends GraphFile, which ultimately describes a file on disk that is made available to FFmpeg during Command execution. Typically this is the raw asset associated with Video, Image, or Audio clips but other resources are also supported. FFmpeg requires files to be specified either as a CommandInput or GraphFilter option value.
Internally, a RenderingOutput will build its CommandInputs from a set of GraphFiles provided by the FilterGraphs. A single Clip may require multiple GraphFiles which may or may not all be converted to CommandInputs. They will all be cached locally though, and their paths ultimately utilized either as direct input, or as filter options.
For instance, a Clip with a TextContainer and ColorContent will require two GraphFiles - one for the Font and another containing the text itself. Neither of these is converted to a CommandInput, because FFmpeg's underlying drawtext filter expects paths to these files to be specified as option values. The font file is cached though, and the text is written to disk.
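The resulting filter might look like this; `fontfile` and `textfile` are genuine drawtext options, while the surrounding object shape is an assumption:

```typescript
// drawtext reads both files from disk itself, so their cached paths are
// passed as option values rather than as CommandInputs.
const drawtextFilter = {
  filter: "drawtext",
  options: {
    fontfile: "/cache/font.ttf", // cached Font GraphFile
    textfile: "/cache/text.txt", // text GraphFile written to disk
    fontsize: 48,
    fontcolor: "white",
  },
};
```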
To assure that all files in a RenderingDescription are available locally, a RenderingOutput will retrieve a promise from its Mash to cache them. In some cases, it will retrieve one directly from its Loader to load specific GraphFiles from the FilterGraphs which are required to determine output duration or dimensions, if it can't be calculated from information supplied in the DefinitionObjects.
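That caching step reduces to a promise over the needed files; `loadGraphFile` here is a hypothetical stand-in for the real Loader method:

```typescript
interface GraphFile { type: string; file: string }

// Resolve once every GraphFile is available locally, yielding local paths.
const cacheFilesPromise = (
  files: GraphFile[],
  loadGraphFile: (file: GraphFile) => Promise<string>,
): Promise<string[]> => Promise.all(files.map(loadGraphFile));
```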
The NodeLoader handles all the complexity of caching each GraphFile locally and correctly providing its file path to GraphFilters. Caching a GraphFile triggers different postprocessing, depending on its type property, which can be either a GraphFileType or a LoadType.
In the simplest case, the type property is a GraphFileType, which implies the file property will be the actual file content. This is simply saved to disk by the RenderingOutput as, for instance, a TEXT, SVG, or PNG file. Binary files must be Base64 encoded.
In cases where the type property is a LoadType, the file property will be a relative or absolute URL. The NodeLoader instance is configured by the RenderingProcess instance to download absolute URLs, but resolve relative ones to a local directory. Typically, the RenderingServer specifies this as the user's upload directory.
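The URL handling might reduce to something like this; only the path resolution is shown, and the function names are assumptions rather than the real NodeLoader API:

```typescript
import * as path from "node:path";

// Absolute URLs are downloaded; relative ones resolve to the upload directory.
const isAbsoluteUrl = (file: string): boolean => /^[a-z][a-z0-9+.-]*:\/\//i.test(file);

const localPath = (file: string, uploadDirectory: string): string =>
  isAbsoluteUrl(file)
    ? path.join(uploadDirectory, path.basename(new URL(file).pathname)) // download target
    : path.resolve(uploadDirectory, file); // relative URL -> user's upload directory
```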