Full-width text is really difficult to read; this makes it more
legible on larger (widescreen) screens. It also means we aren't
inventing our own container instead of using the Bootstrap one.
Signed-off-by: Josh de Kock <josh@itanimul.li>
This was not observed earlier because the only syntax element which
it normally misses with the current setup is slice_qp_delta, but that
is always going to be zero (QP isn't varied per slice in IDR frames),
which always exp-golomb codes as a single 1 bit. The immediately
following part is the byte alignment, which is always a 1 bit followed
by 0s that are ignored. So, as long as the bitstream is never aligned
at that point, we will never notice, because the only difference is
that an ignored bit is a 1 instead of a 0.
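For illustration, here is a minimal sketch (not the actual CBS bit
writer; all names are made up) of why a zero-valued se(v) element such
as slice_qp_delta occupies exactly one '1' bit, with the byte
alignment following it:

    /* Sketch only: exp-golomb coding of a zero-valued se(v) element. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t buf[64]; int pos; } BitWriter;

    /* Append 'value' in 'bits' bits, MSB first. */
    static void put_bits(BitWriter *bw, uint32_t value, int bits)
    {
        for (int i = bits - 1; i >= 0; i--) {
            if ((value >> i) & 1)
                bw->buf[bw->pos / 8] |= 1 << (7 - bw->pos % 8);
            bw->pos++;
        }
    }

    /* ue(v): unsigned exp-golomb; the value 0 is the single bit '1'. */
    static void write_ue(BitWriter *bw, uint32_t v)
    {
        int len = 0;
        uint32_t x = v + 1;
        while (x >> (len + 1))
            len++;
        put_bits(bw, 0, len);     /* leading zeros */
        put_bits(bw, x, len + 1); /* then value + 1 */
    }

    /* se(v): signed exp-golomb, mapped onto ue(v); 0 maps to ue(0). */
    static void write_se(BitWriter *bw, int32_t v)
    {
        write_ue(bw, v <= 0 ? (uint32_t)(-2 * v) : (uint32_t)(2 * v - 1));
    }

    int main(void)
    {
        BitWriter bw = {0};
        write_se(&bw, 0);                   /* slice_qp_delta == 0 */
        printf("bits used: %d\n", bw.pos);  /* prints 1 */
        /* The byte alignment that follows is a '1' bit plus padding
         * zeros, so the missed bit only becomes visible if the stream
         * happens to be byte-aligned at exactly this point. */
        return 0;
    }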
Errors during decoding are currently considered non-fatal and do not
terminate transcoding, so even if parts of the data are corrupted, the
rest may be decodable.
However, that should apply only to the actual decoding calls, not to
failures elsewhere (e.g. configuring filters).
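To make the distinction concrete, a stand-alone sketch (stub helpers
throughout; exit_on_error is just a stand-in for the -xerror
behaviour, not the real ffmpeg.c code):

    #include <stdio.h>

    static int exit_on_error = 0;               /* stand-in for -xerror */

    static int decode_one_packet(int corrupt)   /* stub: pretend decode */
    {
        return corrupt ? -1 : 0;
    }

    static int configure_filters(int bad_graph) /* stub: filter setup */
    {
        return bad_graph ? -1 : 0;
    }

    static int process_packet(int corrupt, int bad_graph)
    {
        int ret = decode_one_packet(corrupt);
        if (ret < 0) {
            /* Corrupt data: log and carry on, unless asked to abort. */
            fprintf(stderr, "decode error on this packet\n");
            return exit_on_error ? ret : 0;
        }

        /* From here on a failure is not a "decode error" any more;
         * a broken filter configuration should propagate and stop
         * the transcode rather than being swallowed. */
        ret = configure_filters(bad_graph);
        if (ret < 0)
            return ret;

        return 0;
    }

    int main(void)
    {
        printf("corrupt packet   -> %d (ignored)\n", process_packet(1, 0));
        printf("bad filter graph -> %d (fatal)\n",   process_packet(0, 1));
        return 0;
    }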
The filtergraph's existence is used in several places to mean that the
filtergraph is fully configured. This causes problems if it is
allocated but its initialization fails (e.g. if a non-existent filter
is specified).
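One possible shape of the fix, sketched with invented names (the real
FilterGraph struct and configuration path differ): track "configured"
explicitly rather than inferring it from the pointer being non-NULL.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct FilterGraphSketch {
        void *graph;      /* would be an AVFilterGraph* in real code    */
        int   configured; /* explicit flag: allocated != configured     */
    } FilterGraphSketch;

    static FilterGraphSketch *fg_alloc(void)
    {
        return calloc(1, sizeof(FilterGraphSketch));
    }

    static int fg_configure(FilterGraphSketch *fg, int filter_exists)
    {
        if (!filter_exists)
            return -1;               /* e.g. a non-existent filter      */
        fg->graph      = (void *)fg; /* stand-in for a built graph      */
        fg->configured = 1;
        return 0;
    }

    /* Callers should test this, not merely "fg != NULL". */
    static int fg_is_ready(const FilterGraphSketch *fg)
    {
        return fg && fg->configured;
    }

    int main(void)
    {
        FilterGraphSketch *fg = fg_alloc();
        if (!fg)
            return 1;
        if (fg_configure(fg, /*filter_exists=*/0) < 0)
            printf("allocated but not usable: ready=%d\n", fg_is_ready(fg));
        free(fg);
        return 0;
    }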
There is really no need for two AAC wrappers; we already have
libfdk-aac, which is better. Not to mention that faac doesn't even
support HEv1 or HEv2. It's also under a license which is unusable for
distribution, so it would only be useful to people who compile their
own ffmpeg and only use it themselves (and at that point they should
just use fdk-aac).
Signed-off-by: Josh de Kock <josh@itanimul.li>
This is a bit messy, mainly due to timestamp handling.
decode_video() relied on the fact that it could set dts on a
flush/drain packet. This is not possible with the new API, and won't
be. (I think doing this was very questionable even with the old API.
Flush packets should not contain any information; they just cause a
FIFO to be emptied.) This is replaced with checking the
best_effort_timestamp for AV_NOPTS_VALUE, and using the suggested DTS
in the drain case.
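Roughly, the replacement looks like this sketch (best_effort_timestamp
and AV_NOPTS_VALUE are the real libavutil names; the helper and the
way the fallback dts is produced are simplified assumptions):

    #include <libavutil/avutil.h>
    #include <libavutil/frame.h>

    /* Sketch: use the decoder's best-effort timestamp, falling back to
     * the dts we would previously have smuggled into the drain packet. */
    static int64_t frame_ts_or_fallback(const AVFrame *frame,
                                        int64_t fallback_dts)
    {
        int64_t best = frame->best_effort_timestamp;

        /* In the drain case (or for streams without usable timestamps)
         * the decoder may report AV_NOPTS_VALUE. */
        if (best == AV_NOPTS_VALUE)
            return fallback_dts;

        return best;
    }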
The modified tests (fate-cavs and others) still fail due to dropping
the last frame. This happens because the timestamp of the last frame
goes backwards (ffprobe -show_frames shows the same thing). I suspect
that this "worked" due to the best effort timestamp logic picking the
DTS over the decreasing PTS. Since this logic is in libavcodec (where
it probably shouldn't be), this can't be easily fixed. The timestamps
of the cavs samples are weird anyway, so I chose not to fix it.
Another strange thing is the timestamp handling in the video path of
process_input_packet (after the decode_video() call). It looks like
the code to increase next_dts and next_pts should be run every time
a frame is decoded, but it is needed even if the output is skipped.
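For what the intended behaviour would look like, a simplified sketch
(only the next_dts/next_pts names mirror ffmpeg.c; everything else is
invented): the prediction advances for every decoded frame, whether or
not that frame is actually forwarded.

    #include <stdint.h>

    typedef struct StreamState {
        int64_t next_dts;
        int64_t next_pts;
    } StreamState;

    /* Advance the predicted timestamps for a decoded frame. Skipping
     * the output must not freeze the prediction, otherwise the later
     * drain/fallback timestamp logic drifts. */
    static void advance_predictions(StreamState *st, int64_t frame_duration,
                                    int output_skipped)
    {
        st->next_dts += frame_duration;
        st->next_pts += frame_duration;
        (void)output_skipped; /* deliberately unused: update is the same */
    }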