@chapter Muxers
@c man begin MUXERS

Muxers are configured elements in Libav which allow writing
multimedia streams to a particular type of file.

When you configure your Libav build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.

You can disable all the muxers with the configure option
@code{--disable-muxers} and selectively enable / disable single muxers
with the options @code{--enable-muxer=@var{MUXER}} /
@code{--disable-muxer=@var{MUXER}}.

The option @code{-formats} of the av* tools will display the list of
enabled muxers.

A description of some of the currently available muxers follows.
@anchor{crc}
@section crc

CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a single line of the form:
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.

For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
avconv -i INPUT -f crc out.crc
@end example

You can print the CRC to stdout with the command:
@example
avconv -i INPUT -f crc -
@end example

You can select the output format of each frame with @command{avconv} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example

See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc

Per-frame CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC for each decoded audio
and video frame. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a line for each audio and video
frame of the form: @var{stream_index}, @var{frame_dts},
@var{frame_size}, 0x@var{CRC}, where @var{CRC} is a hexadecimal
number 0-padded to 8 digits containing the CRC of the decoded frame.

For example to compute the CRC of each decoded frame in the input, and
store it in the file @file{out.crc}:
@example
avconv -i INPUT -f framecrc out.crc
@end example

You can print the CRC of each decoded frame to stdout with the command:
@example
avconv -i INPUT -f framecrc -
@end example

You can select the output format of each frame with @command{avconv} by
specifying the audio and video codec and format. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
avconv -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example

See also the @ref{crc} muxer.
@anchor{hls}
@section hls

Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming specification.

It creates a playlist file and numbered segment files. The output
filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
a .ts extension.

@example
avconv -i in.nut out.m3u8
@end example

@table @option
@item -hls_time @var{seconds}
Set the segment length in seconds.
@item -hls_list_size @var{size}
Set the maximum number of playlist entries.
@item -hls_wrap @var{wrap}
Set the number after which the index wraps.
@item -start_number @var{number}
Start the sequence from @var{number}.
@item -hls_base_url @var{baseurl}
Append @var{baseurl} to every entry in the playlist.
Useful to generate playlists with absolute paths.
@item -hls_allow_cache @var{allowcache}
Explicitly set whether the client MAY (1) or MUST NOT (0) cache media segments.
@item -hls_version @var{version}
Set the protocol version. Enables or disables version-specific features
such as the integer (version 2) or decimal EXTINF values (version 3).
@item -hls_enc @var{enc}
Enable (1) or disable (0) AES-128 encryption.
When enabled, every segment generated is encrypted and the encryption key
is saved as @var{playlist name}.key.
@item -hls_enc_key @var{key}
Use the specified hex-coded 16-byte key to encrypt the segments; by default
it is randomly generated.
@item -hls_enc_key_url @var{keyurl}
If set, @var{keyurl} is prepended instead of @var{baseurl} to the key filename
in the playlist.
@item -hls_enc_iv @var{iv}
Use the specified hex-coded 16-byte initialization vector for every segment
instead of the autogenerated ones.
@end table
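
As an illustrative sketch (the input file name and the option values here
are arbitrary), the following command would produce roughly 10-second
segments and keep at most 5 entries in the playlist:
@example
avconv -i in.nut -hls_time 10 -hls_list_size 5 out.m3u8
@end example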
@anchor{image2}
@section image2

Image file muxer.

The image file muxer writes video frames to image files.

The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string "%d" or "%0@var{N}d", which
specifies the position of the characters representing a numbering in
the filenames. If the form "%0@var{N}d" is used, the string
representing the number in each filename is 0-padded to @var{N}
digits. The literal character '%' can be specified in the pattern with
the string "%%".

If the pattern contains "%d" or "%0@var{N}d", the first filename of
the file list specified will contain the number 1, and all the following
numbers will be sequential.

The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.

For example the pattern "img-%03d.bmp" will specify a sequence of
filenames of the form @file{img-001.bmp}, @file{img-002.bmp}, ...,
@file{img-010.bmp}, etc.
The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.

The following example shows how to use @command{avconv} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
avconv -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example

Note that with @command{avconv}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
avconv -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example

Note also that the pattern does not necessarily have to contain "%d" or
"%0@var{N}d"; for example, to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
avconv -i in.avi -f image2 -frames:v 1 img.jpeg
@end example

@table @option
@item -start_number @var{number}
Start the sequence from @var{number}.
@item -update @var{number}
If @var{number} is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten with new
images.
@end table
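
For instance, a minimal sketch using the @option{-update} option to keep a
single, continuously refreshed image (the file names and rate are arbitrary):
@example
avconv -i in.avi -r 1 -f image2 -update 1 img.jpeg
@end example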
@section matroska

Matroska container muxer.

This muxer implements the Matroska and WebM container specs.

The recognized metadata settings in this muxer are:

@table @option

@item title=@var{title name}
Name provided to a single track.
@end table

@table @option

@item language=@var{language name}
Specifies the language of the track in the Matroska languages form.
@end table

@table @option

@item STEREO_MODE=@var{mode}
Stereo 3D video layout of two views in a single video track.
@table @option
@item mono
video is not stereo
@item left_right
Both views are arranged side by side, Left-eye view is on the left
@item bottom_top
Both views are arranged in top-bottom orientation, Left-eye view is at bottom
@item top_bottom
Both views are arranged in top-bottom orientation, Left-eye view is on top
@item checkerboard_rl
Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
@item checkerboard_lr
Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
@item row_interleaved_rl
Each view is constituted by a row based interleaving, Right-eye view is first row
@item row_interleaved_lr
Each view is constituted by a row based interleaving, Left-eye view is first row
@item col_interleaved_rl
Both views are arranged in a column based interleaving manner, Right-eye view is first column
@item col_interleaved_lr
Both views are arranged in a column based interleaving manner, Left-eye view is first column
@item anaglyph_cyan_red
All frames are in anaglyph format viewable through red-cyan filters
@item right_left
Both views are arranged side by side, Right-eye view is on the left
@item anaglyph_green_magenta
All frames are in anaglyph format viewable through green-magenta filters
@item block_lr
Both eyes laced in one Block, Left-eye view is first
@item block_rl
Both eyes laced in one Block, Right-eye view is first
@end table
@end table

For example, a 3D WebM clip can be created using the following command line:
@example
avconv -i sample_left_right_clip.mpg -an -c:v libvpx -metadata STEREO_MODE=left_right -y stereo_clip.webm
@end example

This muxer supports the following options:

@table @option

@item reserve_index_space
By default, this muxer writes the index for seeking (called cues in Matroska
terms) at the end of the file, because it cannot know in advance how much space
to leave for the index at the beginning of the file. However for some use cases
-- e.g. streaming where seeking is possible but slow -- it is useful to put the
index at the beginning of the file.

If this option is set to a non-zero value, the muxer will reserve a given amount
of space in the file header and then try to write the cues there when the muxing
finishes. If the available space does not suffice, muxing will fail. A safe size
for most use cases should be about 50kB per hour of video.

Note that cues are only written if the output is seekable and this option will
have no effect if it is not.
@end table
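
For example, following the 50kB-per-hour rule of thumb above, reserving about
100kB should be safe for roughly two hours of video. A sketch (the file names
are arbitrary):
@example
avconv -i in.nut -c copy -reserve_index_space 102400 out.mkv
@end example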
@section mov, mp4, ismv

The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
better playback using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.

Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:

@table @option
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item -frag_size @var{size}
Create fragments that contain up to @var{size} bytes of payload data.
@item -movflags frag_custom
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from @command{avconv}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table

If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
@code{-min_frag_duration}, which has to be fulfilled for any of the other
conditions to apply.
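
For example, a fragmented MP4 with a new fragment started at each keyframe,
but never shorter than one second, could be written as follows (a sketch;
the file names are arbitrary):
@example
avconv -i in.nut -c copy -movflags frag_keyframe -min_frag_duration 1000000 frag.mp4
@end example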
Additionally, the way the output file is written can be adjusted
through a few other options:

@table @option
@item -movflags empty_moov
Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.

This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.

This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
@item -movflags disable_chpl
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs.
@item -movflags omit_tfhd_offset
Do not write any absolute base_data_offset in tfhd atoms. This avoids
tying fragments to absolute byte positions in the file/streams.
@item -movflags default_base_moof
Similarly to omit_tfhd_offset, this flag avoids writing the
absolute base_data_offset field in tfhd atoms, but does so by using
the new default-base-is-moof flag instead. This flag is new from
14496-12:2012. This may make the fragments easier to parse in certain
circumstances (avoiding basing track fragment location calculations
on the implicit end of the previous track fragment).
@end table
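
For instance, combining @code{faststart} with a plain (non-fragmented) MP4
moves the index to the front for faster start of playback. A sketch (the
file names are arbitrary):
@example
avconv -i in.nut -c copy -movflags faststart out.mp4
@end example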
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
avconv -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example
@section mp3

The MP3 muxer writes a raw MP3 stream with the following optional features:
@itemize @bullet
@item
An ID3v2 metadata header at the beginning (enabled by default). Versions 2.3 and
2.4 are supported; the @code{id3v2_version} private option controls which one is
used (3 or 4). Setting @code{id3v2_version} to 0 disables the ID3v2 header
completely.

The muxer supports writing attached pictures (APIC frames) to the ID3v2 header.
The pictures are supplied to the muxer in the form of a video stream with a single
packet. There can be any number of those streams, each of which will correspond
to a single APIC frame. The stream metadata tags @var{title} and @var{comment}
map to APIC @var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.

Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.

@item
A Xing/LAME frame right after the ID3v2 header (if present). It is enabled by
default, but will be written only if the output is seekable. The
@code{write_xing} private option can be used to disable it. The frame contains
various information that may be useful to the decoder, like the audio duration
or encoder delay.

@item
A legacy ID3v1 tag at the end of the file (disabled by default). It may be
enabled with the @code{write_id3v1} private option, but as its capabilities are
very limited, its usage is not recommended.
@end itemize

Examples:

Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
avconv -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example

Attach a picture to an mp3:
@example
avconv -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover" \
  -metadata:s:v comment="Cover (Front)" out.mp3
@end example

Write a "clean" MP3 without any extra features:
@example
avconv -i input.wav -write_xing 0 -id3v2_version 0 out.mp3
@end example
@section mpegts

MPEG transport stream muxer.

This muxer implements ISO 13818-1 and part of ETSI EN 300 468.

The muxer options are:

@table @option
@item -mpegts_original_network_id @var{number}
Set the original_network_id (default 0x0001). This is the unique identifier
of a network in DVB. Its main use is in the unique identification of a
service through the path Original_Network_ID, Transport_Stream_ID.
@item -mpegts_transport_stream_id @var{number}
Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
@item -mpegts_service_id @var{number}
Set the service_id (default 0x0001), also known as program in DVB.
@item -mpegts_pmt_start_pid @var{number}
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
Set the first PID for data packets (default 0x0100, max 0x0f00).
@item -muxrate @var{number}
Set a constant muxrate (default VBR).
@item -pcr_period @var{number}
Override the default PCR retransmission time (default 20ms). This is
ignored if a variable muxrate is selected.
@end table
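
For example, a constant-bitrate transport stream with a non-default PCR
retransmission time could be requested as follows (a sketch; the values
and file names are arbitrary):
@example
avconv -i INPUT -c copy -muxrate 3000000 -pcr_period 40 -f mpegts out.ts
@end example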
The recognized metadata settings in the mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set, the default for
@code{service_provider} is "Libav" and the default for
@code{service_name} is "Service01".

@example
avconv -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
     -mpegts_pmt_start_pid 0x1500 \
     -mpegts_start_pid 0x150 \
     -metadata service_provider="Some provider" \
     -metadata service_name="Some Channel" \
     -y out.ts
@end example
@section null

Null muxer.

This muxer does not generate any output file; it is mainly useful for
testing or benchmarking purposes.

For example to benchmark decoding with @command{avconv} you can use the
command:
@example
avconv -benchmark -i INPUT -f null out.null
@end example

Note that the above command does not read or write the @file{out.null}
file, but specifying the output file is required by the @command{avconv}
syntax.

Alternatively you can write the command as:
@example
avconv -benchmark -i INPUT -f null -
@end example
@section nut

@table @option
@item -syncpoints @var{flags}
Change the syncpoint usage in nut:
@table @option
@item @var{default} use the normal low-overhead seeking aids.
@item @var{none} do not use the syncpoints at all, reducing the overhead but making the stream non-seekable;
@item @var{timestamped} extend the syncpoint with a wallclock field.
@end table
The @var{none} and @var{timestamped} flags are experimental.
@end table

@example
avconv -i INPUT -f_strict experimental -syncpoints none - | processor
@end example
@section ogg

Ogg container muxer.

@table @option
@item -page_duration @var{duration}
Preferred page duration, in microseconds. The muxer will attempt to create
pages that are approximately @var{duration} microseconds long. This allows the
user to compromise between seek granularity and container overhead. The default
is 1 second. A value of 0 will fill all segments, making pages as large as
possible. A value of 1 will effectively use 1 packet-per-page in most
situations, giving a small seek granularity at the cost of additional container
overhead.
@item -serial_offset @var{value}
Serial value from which to set the streams' serial numbers.
Setting it to different and sufficiently large values ensures that the produced
ogg files can be safely chained.
@end table
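
For example, to remux an existing Ogg file while letting the muxer fill every
page completely (a sketch, assuming the input streams are already
Ogg-compatible):
@example
avconv -i in.ogg -c copy -page_duration 0 out.ogg
@end example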
@section segment

Basic stream segmenter.

The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. The output filename pattern can be set in a fashion similar to
@ref{image2}.

Every segment starts with a video keyframe, if a video stream is present.
The segment muxer works best with a single constant frame rate video.

Optionally it can generate a flat list of the created segments, one segment
per line.

@table @option
@item segment_format @var{format}
Override the inner container format; by default it is guessed by the filename
extension.
@item segment_time @var{t}
Set the segment duration to @var{t} seconds.
@item segment_list @var{name}
Also generate a listfile named @var{name}.
@item segment_list_type @var{type}
Select the listing format.
@table @option
@item @var{flat} use a simple flat list of entries.
@item @var{hls} use an m3u8-like structure.
@end table
@item segment_list_size @var{size}
Overwrite the listfile once it reaches @var{size} entries.
@item segment_list_entry_prefix @var{prefix}
Prepend @var{prefix} to each entry. Useful to generate absolute paths.
@item segment_wrap @var{limit}
Wrap around the segment index once it reaches @var{limit}.
@end table

@example
avconv -i in.mkv -c copy -map 0 -f segment -segment_list out.list out%03d.nut
@end example
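
A further sketch producing an m3u8-style list with roughly 10-second segments
(the option values and file names are arbitrary, and stream copy assumes the
input codecs are compatible with MPEG-TS):
@example
avconv -i in.mkv -c copy -map 0 -f segment -segment_time 10 -segment_list_type hls -segment_list out.m3u8 out%03d.ts
@end example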
@c man end MUXERS