@chapter Protocols
@c man begin PROTOCOLS

Protocols are configured elements in Libav which allow access to
resources that require the use of a particular protocol.

When you configure your Libav build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".

The option "-protocols" of the av* tools will display the list of
supported protocols.

All protocols accept the following options:

@table @option
@item rw_timeout
Maximum time to wait for (network) read/write operations to complete,
in microseconds.
@end table
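
For instance, assuming the option can be passed directly on the
@command{avconv} command line (the value is in microseconds, so 5000000
is five seconds; server, path and output names are placeholders):
@example
avconv -rw_timeout 5000000 -i http://@var{server}/@var{path} @var{output}
@end example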

A description of the currently available protocols follows.

@section concat

Physical concatenation protocol.

Read and seek from many resources in sequence as if they were
a unique resource.

A URL accepted by this protocol has the syntax:
@example
concat:@var{URL1}|@var{URL2}|...|@var{URLN}
@end example

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @command{avplay} use the
command:
@example
avplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
@end example

Note that you may need to escape the character "|" which is special for
many shells.

@section file

File access protocol.

Read from or write to a file.

For example to read from a file @file{input.mpeg} with @command{avconv}
use the command:
@example
avconv -i file:input.mpeg output.mpeg
@end example

The av* tools default to the file protocol, that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".

This protocol accepts the following options:

@table @option
@item follow
If set to 1, the protocol will retry reading at the end of the file, allowing
reading of files that are still being written. In order for this to terminate,
you either need to use the rw_timeout option, or use the interrupt callback
(for API users).
@end table
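
For example, assuming these options can be given directly on the
@command{avconv} command line, one might follow a file that is still
being written, giving up after ten seconds without new data (the input
and output names are placeholders):
@example
avconv -follow 1 -rw_timeout 10000000 -i file:@var{input}.mpeg @var{output}
@end example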

@section gopher

Gopher protocol.

@section hls

Read Apple HTTP Live Streaming compliant segmented stream as
a uniform one. The M3U8 playlists describing the segments can be
remote HTTP resources or local files, accessed using the standard
file protocol.
The nested protocol is declared by specifying
"+@var{proto}" after the hls URI scheme name, where @var{proto}
is either "file" or "http".

@example
hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8
@end example

Using this protocol is discouraged - the hls demuxer should work
just as well (if not, please report the issues) and is more complete.
To use the hls demuxer instead, simply use the direct URLs to the
m3u8 files.
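
For example, to let the hls demuxer handle the remote playlist from the
example above directly (host and path are placeholders):
@example
avplay http://host/path/to/remote/resource.m3u8
@end example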

@section http

HTTP (Hyper Text Transfer Protocol).

This protocol accepts the following options:

@table @option
@item chunked_post
If set to 1 use chunked Transfer-Encoding for posts; the default is 1.
@item content_type
Set a specific content type for the POST messages.
@item headers
Set custom HTTP headers, can override built-in default headers. The
value must be a string encoding the headers.
@item multiple_requests
Use persistent connections if set to 1; the default is 0.
@item post_data
Set custom HTTP post data.
@item user_agent
Override the User-Agent header. If not specified, a string of the form
"Lavf/<version>" will be used.
@item mime_type
Export the MIME type.
@item icy
If set to 1, request ICY (SHOUTcast) metadata from the server. If the server
supports this, the metadata has to be retrieved by the application by reading
the @option{icy_metadata_headers} and @option{icy_metadata_packet} options.
The default is 1.
@item icy_metadata_headers
If the server supports ICY metadata, this contains the ICY-specific HTTP reply
headers, separated by newline characters.
@item icy_metadata_packet
If the server supports ICY metadata, and @option{icy} was set to 1, this
contains the last non-empty metadata packet sent by the server. It should be
polled at regular intervals by applications interested in mid-stream metadata
updates.
@item offset
Set initial byte offset.
@item end_offset
Try to limit the request to bytes preceding this offset.
@end table
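
As a hedged sketch of how a few of these options might be combined on
the @command{avconv} command line (the header string, user agent,
server, path and output names are placeholders; depending on the build,
the headers value may need to end with a CRLF sequence):
@example
avconv -user_agent "MyPlayer/1.0" -headers "Referer: http://@var{server}/@var{page}" -icy 0 -i http://@var{server}/@var{stream} -c copy @var{output}
@end example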

@section Icecast

Icecast (stream to Icecast servers).

This protocol accepts the following options:

@table @option
@item ice_genre
Set the stream genre.
@item ice_name
Set the stream name.
@item ice_description
Set the stream description.
@item ice_url
Set the stream website URL.
@item ice_public
Set if the stream should be public or not.
The default is 0 (not public).
@item user_agent
Override the User-Agent header. If not specified, a string of the form
"Lavf/<version>" will be used.
@item password
Set the Icecast mountpoint password.
@item content_type
Set the stream content type. This must be set if it is different from
audio/mpeg.
@item legacy_icecast
This enables support for Icecast versions < 2.4.0, which do not support the
HTTP PUT method but only the SOURCE method.
@end table
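
As an illustrative sketch only: the mountpoint URL form used below is an
assumption not documented in this section, the credentials, server and
mountpoint are placeholders, and it is assumed the options can be given
directly on the command line:
@example
avconv -re -i @var{input}.mp3 -c:a copy -content_type audio/mpeg -ice_name "My stream" icecast://source:@var{password}@@@var{server}:8000/@var{mountpoint}
@end example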

@section mmst

MMS (Microsoft Media Server) protocol over TCP.

@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
@example
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example
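
For example, to play such a stream with @command{avplay} (server, app
and playpath are placeholders):
@example
avplay mmsh://@var{server}/@var{app}/@var{playpath}
@end example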

@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.
@example
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
avconv -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
avconv -i input.flv -f avi -y md5:
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.

@section pipe

UNIX pipe access protocol.

Read and write from UNIX pipes.

The accepted syntax is:
@example
pipe:[@var{number}]
@end example

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @command{avconv}:
@example
cat test.wav | avconv -i pipe:0
# ...this is the same as...
cat test.wav | avconv -i pipe:
@end example

For writing to stdout with @command{avconv}:
@example
avconv -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
avconv -i test.wav -f avi pipe: | cat > test.avi
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.

@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
content across a TCP/IP network.

The required syntax is:
@example
rtmp://[@var{username}:@var{password}@@]@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
@end example

The accepted parameters are:

@table @option
@item username
An optional username (mostly for publishing).
@item password
An optional password (mostly for publishing).
@item server
The address of the RTMP server.
@item port
The number of the TCP port to use (1935 by default).
@item app
The name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override
the value parsed from the URI through the @code{rtmp_app} option, too.
@item playpath
The path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:". You
can override the value parsed from the URI through the @code{rtmp_playpath}
option, too.
@item listen
Act as a server, listening for an incoming connection.
@item timeout
Maximum time to wait for the incoming connection. Implies listen.
@end table

Additionally, the following parameters can be set via command line options
(or in code via @code{AVOption}s):

@table @option
@item rtmp_app
Name of application to connect on the RTMP server. This option
overrides the parameter specified in the URI.
@item rtmp_buffer
Set the client buffer time in milliseconds. The default is 3000.
@item rtmp_conn
Extra arbitrary AMF connection parameters, parsed from a string,
e.g. @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}.
Each value is prefixed by a single character denoting the type,
B for Boolean, N for number, S for string, O for object, or Z for null,
followed by a colon. For Booleans the data must be either 0 or 1 for
FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
1 to end or begin an object, respectively. Data items in subobjects may
be named, by prefixing the type with 'N' and specifying the name before
the value (e.g. @code{NB:myFlag:1}). This option may be used multiple
times to construct arbitrary AMF sequences.
@item rtmp_flashver
Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible;
<libavformat version>).)
@item rtmp_flush_interval
Number of packets flushed in the same request (RTMPT only). The default
is 10.
@item rtmp_live
Specify that the media is a live stream. No resuming or seeking in
live streams is possible. The default value is @code{any}, which means the
subscriber first tries to play the live stream specified in the
playpath. If a live stream of that name is not found, it plays the
recorded stream. The other possible values are @code{live} and
@code{recorded}.
@item rtmp_pageurl
URL of the web page in which the media was embedded. By default no
value will be sent.
@item rtmp_playpath
Stream identifier to play or to publish. This option overrides the
parameter specified in the URI.
@item rtmp_subscribe
Name of live stream to subscribe to. By default no value will be sent.
It is only sent if the option is specified or if rtmp_live
is set to live.
@item rtmp_swfhash
SHA256 hash of the decompressed SWF file (32 bytes).
@item rtmp_swfsize
Size of the decompressed SWF file, required for SWFVerification.
@item rtmp_swfurl
URL of the SWF player for the media. By default no value will be sent.
@item rtmp_swfverify
URL of the player SWF file; the hash and size are computed automatically.
@item rtmp_tcurl
URL of the target stream. Defaults to proto://host[:port]/app.
@end table

For example to read with @command{avplay} a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
@example
avplay rtmp://myserver/vod/sample
@end example

To publish to a password-protected server, passing the playpath and
app names separately:
@example
avconv -re -i <input> -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@@myserver/
@end example
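
As a further sketch, the AMF string from the @code{rtmp_conn}
description above could be passed on the command line when publishing
(the server, application and stream names are placeholders, and the AMF
values themselves are only illustrative):
@example
avconv -re -i <input> -f flv -rtmp_conn "B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0" rtmp://myserver/live/mystream
@end example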

@section rtmpe

Encrypted Real-Time Messaging Protocol.

The Encrypted Real-Time Messaging Protocol (RTMPE) is used for
streaming multimedia content within standard cryptographic primitives,
consisting of Diffie-Hellman key exchange and HMACSHA256, generating
a pair of RC4 keys.

@section rtmps

Real-Time Messaging Protocol over a secure SSL connection.

The Real-Time Messaging Protocol (RTMPS) is used for streaming
multimedia content across an encrypted connection.

@section rtmpt

Real-Time Messaging Protocol tunneled through HTTP.

The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
for streaming multimedia content within HTTP requests to traverse
firewalls.

@section rtmpte

Encrypted Real-Time Messaging Protocol tunneled through HTTP.

The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE)
is used for streaming multimedia content within HTTP requests to traverse
firewalls.

@section rtmpts

Real-Time Messaging Protocol tunneled through HTTPS.

The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.

@section librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:
@example
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
@end example

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.

@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@command{avconv}:
@example
avconv -re -i myfile -f flv rtmp://myserver/live/mystream
@end example

To play the same stream using @command{avplay}:
@example
avplay "rtmp://myserver/live/mystream live=1"
@end example

@section rtp

Real-time Transport Protocol.

@section rtsp

RTSP is not technically a protocol handler in libavformat, it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
@uref{http://github.com/revmischa/rtsp-server, RTSP server}).

The required syntax for an RTSP URL is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}
@end example

The following options (set on the @command{avconv}/@command{avplay} command
line, or set in code via @code{AVOption}s or in @code{avformat_open_input})
are supported:

Flags for @code{rtsp_transport}:
@table @option
@item udp
Use UDP as lower transport protocol.
@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.
@item udp_multicast
Use UDP multicast as lower transport protocol.
@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing through proxies.
@end table

Multiple lower transport protocols may be specified; in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.

Flags for @code{rtsp_flags}:
@table @option
@item filter_src
Accept packets only from negotiated peer address and port.
@item listen
Act as a server, listening for an incoming connection.
@end table

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). This
can be disabled by setting the maximum demuxing delay to zero (via
the @code{max_delay} field of AVFormatContext).

When watching multi-bitrate Real-RTSP streams with @command{avplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
@example
avplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
@end example

To watch a stream tunneled over HTTP:
@example
avplay -rtsp_transport http rtsp://server/video.mp4
@end example

To send a stream in realtime to an RTSP server, for others to watch:
@example
avconv -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example

To receive a stream in realtime:
@example
avconv -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
@end example

@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat, it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

@subsection Muxer

The syntax for a SAP URL given to the muxer is:
@example
sap://@var{destination}[:@var{port}][?@var{options}]
@end example

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option
@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.
@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.
@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.
@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
@end table

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:
@example
avconv -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
@end example

Similarly, for watching in avplay:
@example
avconv -re -i @var{input} -f sap sap://224.0.0.255
@end example

And for watching in avplay, over IPv6:
@example
avconv -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@end example

@subsection Demuxer

The syntax for a SAP URL given to the demuxer is:
@example
sap://[@var{address}][:@var{port}]
@end example

@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:
@example
avplay sap://
@end example

To play back the first stream announced on the default IPv6 SAP multicast address:
@example
avplay sap://[ff0e::2:7ffe]
@end example

@section tcp

Transmission Control Protocol.

The required syntax for a TCP URL is:
@example
tcp://@var{hostname}:@var{port}[?@var{options}]
@end example

@table @option
@item listen
Listen for an incoming connection.

@example
avconv -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
avplay tcp://@var{hostname}:@var{port}
@end example
@end table

@section tls

Transport Layer Security (TLS) / Secure Sockets Layer (SSL).

The required syntax for a TLS URL is:
@example
tls://@var{hostname}:@var{port}
@end example

The following parameters can be set via command line options
(or in code via @code{AVOption}s):

@table @option
@item ca_file
A file containing certificate authority (CA) root certificates to treat
as trusted. If the linked TLS library contains a default this might not
need to be specified for verification to work, but not all libraries and
setups have defaults built in.
@item tls_verify=@var{1|0}
If enabled, try to verify the peer that we are communicating with.
Note, if using OpenSSL, this currently only makes sure that the
peer certificate is signed by one of the root certificates in the CA
database, but it does not validate that the certificate actually
matches the host name we are trying to connect to. (With GnuTLS,
the host name is validated as well.)
This is disabled by default since it requires a CA database to be
provided by the caller in many cases.
@item cert_file
A file containing a certificate to use in the handshake with the peer.
(When operating as server, in listen mode, this is more often required
by the peer, while client certificates only are mandated in certain
setups.)
@item key_file
A file containing the private key for the certificate.
@item listen=@var{1|0}
If enabled, listen for connections on the provided port, and assume
the server role in the handshake instead of the client role.
@end table
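
As an illustrative sketch (the CA bundle path, hostname, port and output
are placeholders, and it is assumed the options can be set directly on
the command line), reading from a TLS source with peer verification
enabled might look like:
@example
avconv -tls_verify 1 -ca_file @var{/path/to/ca-bundle.crt} -i tls://@var{hostname}:@var{port} @var{output}
@end example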

@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.

The list of supported options follows.

@table @option
@item buffer_size=@var{size}
Set the UDP buffer size in bytes.
@item localport=@var{port}
Override the local UDP port to bind with.
@item localaddr=@var{addr}
Choose the local IP address. This is useful e.g. if sending multicast
and the host has multiple interfaces, where the user can choose
which interface to send on by specifying the IP address of that interface.
@item pkt_size=@var{size}
Set the size in bytes of UDP packets.
@item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.
@item ttl=@var{ttl}
Set the time to live value (for multicast only).
@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with ff_udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in ff_udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
@item sources=@var{address}[,@var{address}]
Only receive packets sent to the multicast group from one of the
specified sender IP addresses.
@item block=@var{address}[,@var{address}]
Ignore packets sent to the multicast group from the specified
sender IP addresses.
@end table

Some usage examples of the udp protocol with @command{avconv} follow.

To stream over UDP to a remote endpoint:
@example
avconv -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example

To stream in mpegts format over UDP using 188 sized UDP packets, using a large input buffer:
@example
avconv -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
@end example

To receive over UDP from a remote endpoint:
@example
avconv -i udp://[@var{multicast-address}]:@var{port}
@end example
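
As a further hedged sketch combining the multicast-related options
described above (all addresses and the output name are placeholders),
reception could be restricted to a single sender:
@example
avconv -i udp://@var{multicast-address}:@var{port}?sources=@var{sender-address} @var{output}
@end example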

@section unix

Unix local socket.

The required syntax for a Unix socket URL is:
@example
unix://@var{filepath}
@end example

The following parameters can be set via command line options
(or in code via @code{AVOption}s):

@table @option
@item timeout
Timeout in milliseconds.
@item listen
Create the Unix socket in listening mode.
@end table
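
As a hedged illustration (the socket path and output name are
placeholders, and it is assumed the listen option can be set directly on
the command line), one might read data that another process writes into
a local socket:
@example
avconv -listen 1 -i unix:///tmp/av.sock @var{output}
@end example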

@c man end PROTOCOLS