@chapter Protocols
@c man begin PROTOCOLS

Protocols are configured elements in FFmpeg that allow access to
resources that require the use of a particular protocol.

When you configure your FFmpeg build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".

The option "-protocols" of the ff* tools will display the list of
supported protocols.

A description of the currently available protocols follows.

@section bluray

Read BluRay playlist.

The accepted options are:
@table @option

@item angle
BluRay angle

@item chapter
Start chapter (1...N)

@item playlist
Playlist to read (BDMV/PLAYLIST/?????.mpls)

@end table

Examples:

Read longest playlist from BluRay mounted to /mnt/bluray:
@example
bluray:/mnt/bluray
@end example

Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start from chapter 2:
@example
-playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray
@end example
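
As an illustration (a sketch only, using the same mount point as above), the
options and the URL combine on a full @command{ffplay} command line like this:
@example
ffplay -playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray
@end example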

@section cache

Caching wrapper for input stream.

Cache the input stream to a temporary file. It brings seeking capability to live streams.
@example
cache:@var{URL}
@end example
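
For example, a sketch of wrapping a remote HTTP resource (host and path are
placeholders) so that @command{ffplay} can seek in it:
@example
ffplay cache:http://host/path/to/resource.ts
@end example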

@section concat

Physical concatenation protocol.

Read and seek from many resources in sequence as if they were
a unique resource.

A URL accepted by this protocol has the syntax:
@example
concat:@var{URL1}|@var{URL2}|...|@var{URLN}
@end example

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @command{ffplay} use the
command:
@example
ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
@end example

Note that you may need to escape the character "|" which is special for
many shells.

@section crypto

AES-encrypted stream reading protocol.

The accepted options are:
@table @option

@item key
Set the AES decryption key binary block from given hexadecimal representation.

@item iv
Set the AES decryption initialization vector binary block from given hexadecimal representation.
@end table

Accepted URL formats:
@example
crypto:@var{URL}
crypto+@var{URL}
@end example
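
As a sketch only, assuming the @option{key} and @option{iv} options can be
passed on the ff* command line (the file name and hexadecimal values are
placeholders):
@example
ffplay -key 603deb1015ca71be2b73aef0857d7781 -iv 000102030405060708090a0b0c0d0e0f crypto:input.enc
@end example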

@section data

Data in-line in the URI. See @url{http://en.wikipedia.org/wiki/Data_URI_scheme}.

For example, to convert a GIF file given inline with @command{ffmpeg}:
@example
ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=" smiley.png
@end example

@section file

File access protocol.

Read from or write to a file.

For example to read from a file @file{input.mpeg} with @command{ffmpeg}
use the command:
@example
ffmpeg -i file:input.mpeg output.mpeg
@end example

The ff* tools default to the file protocol, that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".

@section ftp

FTP (File Transfer Protocol).

Read from or write to remote resources using the FTP protocol.

The following syntax is required.
@example
ftp://[user[:password]@@]server[:port]/path/to/remote/resource.mpeg
@end example

This protocol accepts the following options.

@table @option
@item timeout
Set timeout of socket I/O operations used by the underlying low level
operation. By default it is set to -1, which means that the timeout is
not specified.

@item ftp-anonymous-password
Password used when logging in as an anonymous user. Typically an e-mail
address should be used.

@item ftp-write-seekable
Control seekability of connection during encoding. If set to 1 the
resource is supposed to be seekable, if set to 0 it is assumed not
to be seekable. Default value is 0.
@end table

NOTE: The protocol can be used as output, but it is recommended not to do
so unless special care is taken (tests, customized server configuration
etc.). Different FTP servers behave in different ways during seek
operations. ff* tools may produce incomplete content due to server limitations.
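
For example, a sketch of copying a remote file to a local one with
@command{ffmpeg} (credentials, server and paths are placeholders):
@example
ffmpeg -i ftp://user:password@@server/path/to/remote/resource.mpeg local.mpeg
@end example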

@section gopher

Gopher protocol.

@section hls

Read Apple HTTP Live Streaming compliant segmented stream as
a uniform one. The M3U8 playlists describing the segments can be
remote HTTP resources or local files, accessed using the standard
file protocol.

The nested protocol is declared by specifying
"+@var{proto}" after the hls URI scheme name, where @var{proto}
is either "file" or "http".

@example
hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8
@end example

Using this protocol is discouraged - the hls demuxer should work
just as well (if not, please report the issues) and is more complete.
To use the hls demuxer instead, simply use the direct URLs to the
m3u8 files.
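
For example, a sketch of the recommended approach, handing the playlist URL
directly to the hls demuxer (host and path are placeholders):
@example
ffplay http://host/path/to/remote/resource.m3u8
@end example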

@section http

HTTP (Hyper Text Transfer Protocol).

This protocol accepts the following options.

@table @option
@item seekable
Control seekability of connection. If set to 1 the resource is
supposed to be seekable, if set to 0 it is assumed not to be seekable,
if set to -1 it will try to autodetect if it is seekable. Default
value is -1.

@item chunked_post
If set to 1 use chunked Transfer-Encoding for posts, default is 1.

@item headers
Set custom HTTP headers, can override built-in default headers. The
value must be a string encoding the headers.

@item content_type
Force a content type.

@item user-agent
Override the User-Agent header. If not specified the protocol will use a
string describing the libavformat build.

@item multiple_requests
Use persistent connections if set to 1. By default it is 0.

@item post_data
Set custom HTTP post data.

@item timeout
Set timeout of socket I/O operations used by the underlying low level
operation. By default it is set to -1, which means that the timeout is
not specified.

@item mime_type
Set MIME type.

@item cookies
Set the cookies to be sent in future requests. The format of each cookie is the
same as the value of a Set-Cookie HTTP response field. Multiple cookies can be
delimited by a newline character.
@end table
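
For example, a sketch of passing a custom header and marking the resource as
non-seekable while reading a live HTTP stream (URL and header value are
placeholders):
@example
ffplay -headers "Referer: http://example.com/" -seekable 0 http://host/path/to/live/stream.ts
@end example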

@subsection HTTP Cookies

Some HTTP requests will be denied unless cookie values are passed in with the
request. The @option{cookies} option allows these cookies to be specified. At
the very least, each cookie must specify a value along with a path and domain.
HTTP requests that match both the domain and path will automatically include the
cookie value in the HTTP Cookie header field. Multiple cookies can be delimited
by a newline.

The required syntax to play a stream specifying a cookie is:
@example
ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" http://somedomain.com/somestream.m3u8
@end example

@section mmst

MMS (Microsoft Media Server) protocol over TCP.

@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
@example
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example
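
For example, a sketch of playing an MMS-over-HTTP stream with @command{ffplay},
filling the syntax above with placeholder values:
@example
ffplay mmsh://@var{server}/@var{app}/@var{playpath}
@end example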

@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.
@example
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.

@section pipe

UNIX pipe access protocol.

Read and write from UNIX pipes.

The accepted syntax is:
@example
pipe:[@var{number}]
@end example

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @command{ffmpeg}:
@example
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
@end example

For writing to stdout with @command{ffmpeg}:
@example
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.

@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
content across a TCP/IP network.

The required syntax is:
@example
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
@end example

The accepted parameters are:
@table @option

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (by default it is 1935).

@item app
It is the name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override
the value parsed from the URI through the @code{rtmp_app} option, too.

@item playpath
It is the path or name of the resource to play with reference to the
application specified in @var{app}, may be prefixed by "mp4:". You
can override the value parsed from the URI through the @code{rtmp_playpath}
option, too.

@item listen
Act as a server, listening for an incoming connection.

@item timeout
Maximum time to wait for the incoming connection. Implies listen.
@end table

Additionally, the following parameters can be set via command line options
(or in code via @code{AVOption}s):
@table @option

@item rtmp_app
Name of application to connect on the RTMP server. This option
overrides the parameter specified in the URI.

@item rtmp_buffer
Set the client buffer time in milliseconds. The default is 3000.

@item rtmp_conn
Extra arbitrary AMF connection parameters, parsed from a string,
e.g. like @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}.
Each value is prefixed by a single character denoting the type,
B for Boolean, N for number, S for string, O for object, or Z for null,
followed by a colon. For Booleans the data must be either 0 or 1 for
FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
1 to end or begin an object, respectively. Data items in subobjects may
be named, by prefixing the type with 'N' and specifying the name before
the value (i.e. @code{NB:myFlag:1}). This option may be used multiple
times to construct arbitrary AMF sequences.

@item rtmp_flashver
Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2.

@item rtmp_flush_interval
Number of packets flushed in the same request (RTMPT only). The default
is 10.

@item rtmp_live
Specify that the media is a live stream. No resuming or seeking in
live streams is possible. The default value is @code{any}, which means the
subscriber first tries to play the live stream specified in the
playpath. If a live stream of that name is not found, it plays the
recorded stream. The other possible values are @code{live} and
@code{recorded}.

@item rtmp_pageurl
URL of the web page in which the media was embedded. By default no
value will be sent.

@item rtmp_playpath
Stream identifier to play or to publish. This option overrides the
parameter specified in the URI.

@item rtmp_subscribe
Name of live stream to subscribe to. By default no value will be sent.
It is only sent if the option is specified or if rtmp_live
is set to live.

@item rtmp_swfhash
SHA256 hash of the decompressed SWF file (32 bytes).

@item rtmp_swfsize
Size of the decompressed SWF file, required for SWFVerification.

@item rtmp_swfurl
URL of the SWF player for the media. By default no value will be sent.

@item rtmp_swfverify
URL to the player SWF file; the hash and size are computed automatically.

@item rtmp_tcurl
URL of the target stream. Defaults to proto://host[:port]/app.
@end table

For example to read with @command{ffplay} a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
@example
ffplay rtmp://myserver/vod/sample
@end example
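
Similarly, a sketch of publishing a local file to an application named "live"
on the same hypothetical server (the stream name and input file are
placeholders):
@example
ffmpeg -re -i sample.mp4 -f flv rtmp://myserver/live/mystream
@end example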

@section rtmpe

Encrypted Real-Time Messaging Protocol.

The Encrypted Real-Time Messaging Protocol (RTMPE) is used for
streaming multimedia content within standard cryptographic primitives,
consisting of Diffie-Hellman key exchange and HMACSHA256, generating
a pair of RC4 keys.

@section rtmps

Real-Time Messaging Protocol over a secure SSL connection.

The Real-Time Messaging Protocol (RTMPS) is used for streaming
multimedia content across an encrypted connection.

@section rtmpt

Real-Time Messaging Protocol tunneled through HTTP.

The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
for streaming multimedia content within HTTP requests to traverse
firewalls.

@section rtmpte

Encrypted Real-Time Messaging Protocol tunneled through HTTP.

The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE)
is used for streaming multimedia content within HTTP requests to traverse
firewalls.

@section rtmpts

Real-Time Messaging Protocol tunneled through HTTPS.

The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.

@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:
@example
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
@end example

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@command{ffmpeg}:
@example
ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
@end example

To play the same stream using @command{ffplay}:
@example
ffplay "rtmp://myserver/live/mystream live=1"
@end example

@section rtp

Real-time Transport Protocol.
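
For example, a sketch of sending a single stream over RTP (the RTP muxer
carries one stream per session, so audio is dropped here with @option{-an};
hostname and port are placeholders). The SDP describing the session is
printed on standard output:
@example
ffmpeg -re -i @var{input} -an -f rtp rtp://@var{hostname}:@var{port}
@end example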

@section rtsp

RTSP is not technically a protocol handler in libavformat, it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
@uref{http://github.com/revmischa/rtsp-server, RTSP server}).

The required syntax for an RTSP URL is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}
@end example

The following options (set on the @command{ffmpeg}/@command{ffplay} command
line, or set in code via @code{AVOption}s or in @code{avformat_open_input}),
are supported:

Flags for @code{rtsp_transport}:
@table @option

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item udp_multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing through proxies.
@end table

Multiple lower transport protocols may be specified, in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.

Flags for @code{rtsp_flags}:
@table @option

@item filter_src
Accept packets only from negotiated peer address and port.

@item listen
Act as a server, listening for an incoming connection.
@end table

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). This
can be disabled by setting the maximum demuxing delay to zero (via
the @code{max_delay} field of AVFormatContext).

When watching multi-bitrate Real-RTSP streams with @command{ffplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
@example
ffplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
@end example

To watch a stream tunneled over HTTP:
@example
ffplay -rtsp_transport http rtsp://server/video.mp4
@end example

To send a stream in realtime to an RTSP server, for others to watch:
@example
ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example

To receive a stream in realtime:
@example
ffmpeg -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
@end example

The following option is also supported:
@table @option
@item stimeout
Socket I/O timeout in microseconds.
@end table

@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat, it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

@subsection Muxer

The syntax for a SAP URL given to the muxer is:
@example
sap://@var{destination}[:@var{port}][?@var{options}]
@end example

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.

@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.

@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
@end table

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:
@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
@end example

Similarly, for watching in @command{ffplay}:
@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255
@end example

And for watching in @command{ffplay}, over IPv6:
@example
ffmpeg -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@end example

@subsection Demuxer

The syntax for a SAP URL given to the demuxer is:
@example
sap://[@var{address}][:@var{port}]
@end example

@var{address} is the multicast address to listen for announcements on,
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:
@example
ffplay sap://
@end example

To play back the first stream announced on the default IPv6 SAP multicast address:
@example
ffplay sap://[ff0e::2:7ffe]
@end example

@section sctp

Stream Control Transmission Protocol.

The accepted URL syntax is:
@example
sctp://@var{host}:@var{port}[?@var{options}]
@end example

The protocol accepts the following options:
@table @option
@item listen
If set to any value, listen for an incoming connection. An outgoing connection is done by default.

@item max_streams
Set the maximum number of streams. By default no limit is set.
@end table
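
As a sketch only (the choice of the MPEG-TS muxer and the listen/connect
pairing are assumptions, not taken from this document), a listener and a
sender might be combined like this:
@example
ffplay sctp://@var{hostname}:@var{port}?listen=1
ffmpeg -re -i @var{input} -f mpegts sctp://@var{hostname}:@var{port}
@end example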

@section srtp

Secure Real-time Transport Protocol.

The accepted options are:
@table @option
@item srtp_in_suite
@item srtp_out_suite
Select input and output encoding suites.

Supported values:
@table @samp
@item AES_CM_128_HMAC_SHA1_80
@item SRTP_AES128_CM_HMAC_SHA1_80
@item AES_CM_128_HMAC_SHA1_32
@item SRTP_AES128_CM_HMAC_SHA1_32
@end table

@item srtp_in_params
@item srtp_out_params
Set input and output encoding parameters, which are expressed by a
base64-encoded representation of a binary block. The first 16 bytes of
this binary block are used as master key, the following 14 bytes are
used as master salt.
@end table
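
As a sketch only, assuming these options can be given on the @command{ffplay}
command line and that the sender uses the same suite (hostname, port and the
base64 parameters are placeholders):
@example
ffplay -srtp_in_suite AES_CM_128_HMAC_SHA1_80 \
       -srtp_in_params zsSLvSe1MJFOH0gdUDrtgg2IR3BLJmTqRGij6RFf \
       srtp://@var{hostname}:@var{port}
@end example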

@section tcp

Transmission Control Protocol.

The required syntax for a TCP URL is:
@example
tcp://@var{hostname}:@var{port}[?@var{options}]
@end example

@table @option

@item listen
Listen for an incoming connection.

@item timeout=@var{microseconds}
In read mode: if no data arrives within this time interval, raise an error.
In write mode: if the socket cannot be written to within this time interval, raise an error.
This also sets the timeout for establishing the TCP connection.

@example
ffmpeg -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
ffplay tcp://@var{hostname}:@var{port}
@end example

@end table

@section tls

Transport Layer Security/Secure Sockets Layer

The required syntax for a TLS/SSL URL is:
@example
tls://@var{hostname}:@var{port}[?@var{options}]
@end example

@table @option

@item listen
Act as a server, listening for an incoming connection.

@item cafile=@var{filename}
Certificate authority file. The file must be in OpenSSL PEM format.

@item cert=@var{filename}
Certificate file. The file must be in OpenSSL PEM format.

@item key=@var{filename}
Private key file.

@item verify=@var{0|1}
Verify the peer's certificate.

@end table

Example command lines:

To create a TLS/SSL server that serves an input stream:
@example
ffmpeg -i @var{input} -f @var{format} tls://@var{hostname}:@var{port}?listen&cert=@var{server.crt}&key=@var{server.key}
@end example

To play back a stream from the TLS/SSL server using @command{ffplay}:
@example
ffplay tls://@var{hostname}:@var{port}
@end example

@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.

In case threading is enabled on the system, a circular buffer is used
to store the incoming data, which helps reduce loss of data due to
UDP socket buffer overruns. The @var{fifo_size} and
@var{overrun_nonfatal} options are related to this buffer.

The list of supported options follows.

@table @option

@item buffer_size=@var{size}
Set the UDP socket buffer size in bytes. This is used both for the
receiving and the sending buffer size.

@item localport=@var{port}
Override the local UDP port to bind with.

@item localaddr=@var{addr}
Choose the local IP address. This is useful e.g. if sending multicast
and the host has multiple interfaces, where the user can choose
which interface to send on by specifying the IP address of that interface.

@item pkt_size=@var{size}
Set the size in bytes of UDP packets.

@item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.

@item ttl=@var{ttl}
Set the time to live value (for multicast only).

@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with ff_udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in ff_udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.

@item sources=@var{address}[,@var{address}]
Only receive packets sent to the multicast group from one of the
specified sender IP addresses.

@item block=@var{address}[,@var{address}]
Ignore packets sent to the multicast group from the specified
sender IP addresses.

@item fifo_size=@var{units}
Set the UDP receiving circular buffer size, expressed as a number of
packets with size of 188 bytes. If not specified, defaults to 7*4096.

@item overrun_nonfatal=@var{1|0}
Survive in case of UDP receiving circular buffer overrun. Default
value is 0.

@item timeout=@var{microseconds}
In read mode: if no data arrives within this time interval, raise an error.
@end table

Some usage examples of the UDP protocol with @command{ffmpeg} follow.

To stream over UDP to a remote endpoint:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example

To stream in mpegts format over UDP using 188 sized UDP packets, using a large input buffer:
@example
ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
@end example

To receive over UDP from a remote endpoint:
@example
ffmpeg -i udp://[@var{multicast-address}]:@var{port}
@end example

@c man end PROTOCOLS