1. Workaround for decoders that fail to handle the END_OF_STREAM flag.
2. Revert processing of the final output buffer when it's non-empty,
since that change introduced another bug (#596). A sketch of the
resulting EOS handling follows below.
Reverts: b88012f51f
Issue: #417
Issue: #596
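
For illustration only, a minimal sketch of the resulting EOS handling
when draining a MediaCodec (processOutputBuffer and outputStreamEnded
are hypothetical names, not the library's actual members):

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int index = codec.dequeueOutputBuffer(info, 0 /* timeoutUs */);
    if (index >= 0) {
      if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        // Treat an EOS buffer purely as end of stream, even if
        // info.size > 0; processing its payload is what introduced #596.
        codec.releaseOutputBuffer(index, false /* render */);
        outputStreamEnded = true;
      } else {
        processOutputBuffer(index, info);
      }
    }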
1. Remove the requirement for TrackRenderer implementations to
report the current position, unless they are time sources.
2. Expose whether renderers have media to play. The immediate
benefits are to resolve the referenced GitHub issue and to display
only the appropriate Audio/Video/Text buttons in the demo app for
the media being played. This is also a natural step toward
multi-track support. A hypothetical sketch of the reshaped API
follows below.
GitHub issue: #541
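
A hypothetical outline of the reshaped TrackRenderer surface (the
names here are illustrative, not the actual signatures):

    public abstract class TrackRenderer {
      // Only time-source renderers (e.g. audio) report a position.
      protected boolean isTimeSource() {
        return false;
      }
      protected long getCurrentPositionUs() {
        throw new UnsupportedOperationException("Not a time source");
      }
      // Whether this renderer has media to play, letting the demo app
      // show only the relevant Audio/Video/Text buttons.
      protected abstract boolean hasMediaToPlay();
    }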
Remove the need to specify the number of downstream renderers to
HlsSampleSource, FrameworkSampleSource and ExtractorSampleSource by
requiring the downstream renderers to register with the SampleSource
instances in their constructors (sketched below). This eliminates a
common source of subtle client bugs where the passed value was
incorrect.
This also fixes a technical mistake where HlsChunkSource was fed
seekPositionUs=-1 when obtaining the first chunk. This was wrong, but
because HlsChunkSource's use of the variable forces the seek to stay
within bounds, we got away with it.
Issue: #385
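
A sketch of the constructor-time registration, assuming a
hypothetical register() method on SampleSource and a renderer that
keeps the source in a field:

    public VideoTrackRenderer(SampleSource source) {
      this.source = source;
      // The source counts its downstream renderers itself, so callers
      // no longer pass (and potentially get wrong) an explicit count.
      source.register();
    }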
The OMX component needs to be configured with a format that has a
MIME type of audio/raw. Remove Ac3PassthroughAudioTrackRenderer,
which is no longer used.
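
For illustration, the PCM configuration looks roughly like this
(sample rate and channel count are placeholders):

    MediaFormat format = MediaFormat.createAudioFormat(
        "audio/raw", 44100 /* sampleRate */, 2 /* channelCount */);
    codec.configure(format, null /* surface */, null /* crypto */, 0);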
- Rather than returning a map, return a DrmInitData object,
with mapped and non-mapped implementations.
- Include a suitable mimeType to pass to MediaDrm. Previously
we were incorrectly passing the mimeType of the samples,
whereas MediaDrm expects the container mimeType. Note that
it doesn't matter whether the mimeType starts with "video" or
"audio", hence video mimeTypes are used everywhere.
It was possible for a codec input buffer to be filled with two
frames' worth of data if seekTo was called after populating the
buffer while waitingForKeys was true and the seek did not trigger a
flush. This caused the CryptoInfo to be configured as if the input
buffer contained a large amount of reconfiguration data as
cleartext.
Move resetting waitingForKeys to flushCodec, so that we don't try to read the
next sample from the source until the first one has been consumed or discarded.
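
Sketched (flushCodec is assumed to be the existing flush path):

    private void flushCodec() {
      codec.flush();
      // Moved here from the seek path: a populated input buffer keeps
      // its sample, and no further sample is read from the source until
      // it has been consumed or discarded.
      waitingForKeys = false;
      // (other flush-related state resets)
    }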
The complexity around not enabling the video renderer before it
has a valid surface exists because MediaCodecTrackRenderer supports
a "discard" mode where it pulls through and discards samples
without a decoder. This mode means that if the demo app were to
enable the renderer before supplying the surface, the renderer
could discard the first few frames prior to getting the surface,
meaning video rendering wouldn't happen until the following sync
frame.
To get a handle on complexity, I think we're better off just removing
support for this mode, which nicely decouples how the demo app
handles surfaces vs. how it handles enabling/disabling renderers.
* This fixes a bug when switching from HE-AAC 22050Hz to AAC 44100Hz: the AudioTrack was not reset, so we were trying to write a bad number of bytes, triggering an "AudioTrack.write() called with invalid size" error.
* This also improves quality switches, making them almost seamless.
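
Illustrative shape of the fix, assuming the renderer keeps the track
in an audioTrack field:

    if (audioTrack != null && audioTrack.getSampleRate() != sampleRate) {
      // The format changed under us; release and lazily recreate the
      // track rather than writing buffers sized for the old format.
      audioTrack.release();
      audioTrack = null;
    }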
Looking up a long in a HashSet<Long> auto-boxes the long and leaves
it for the GC. As decodeOnly is relatively infrequent, it's much
better to do a simple linear search in a List<Long>; that way we
avoid boxing every incoming timestamp value. In the common case this
is a linear search of an empty list, a very fast operation.
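
A sketch of the lookup; comparing a primitive long against each
element unboxes the stored values but never boxes the incoming one:

    private final List<Long> decodeOnlyTimestamps = new ArrayList<>();

    private boolean isDecodeOnly(long presentationTimeUs) {
      // Usually an empty list, so this loop body rarely executes.
      for (int i = 0; i < decodeOnlyTimestamps.size(); i++) {
        if (decodeOnlyTimestamps.get(i) == presentationTimeUs) {
          return true;
        }
      }
      return false;
    }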
Signed-off-by: Jonas Larsson <jonas@hallerud.se>
Propagate elapsedRealtimeUs to the video renderer. This allows
the renderer to calculate and adjust for the elapsed time since
the start of the current rendering loop (see the sketch below).
Typically this is <2ms, but there are situations where it can go
higher (normally when the video renderer ends up processing more
than one output buffer in a single loop).
Also made variable naming more consistent throughout the package.
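
The adjustment looks roughly like this, assuming positionUs and
elapsedRealtimeUs were sampled together at the start of the loop:

    // How long ago the loop sampled positionUs.
    long elapsedSinceStartOfLoopUs =
        (SystemClock.elapsedRealtime() * 1000) - elapsedRealtimeUs;
    // How early the next output buffer is, after the adjustment.
    long earlyUs =
        bufferPresentationTimeUs - positionUs - elapsedSinceStartOfLoopUs;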
This means that after a decoder flush, the renderer will avoid
feeding non-keyframes into the decoder until it has received and
fed the first keyframe. The decoder has no way of correctly
decoding non-keyframes that arrive before a keyframe.
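
Sketched, with C.SAMPLE_FLAG_SYNC standing in for whatever keyframe
flag the sample carries (the names here are assumptions):

    if (waitingForFirstSyncFrame) {
      if ((sampleHolder.flags & C.SAMPLE_FLAG_SYNC) == 0) {
        // Discard the sample; the decoder can't decode it yet.
        return true;
      }
      waitingForFirstSyncFrame = false;
    }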
Since we have a Format class as well, it's very confusing that
FormatHolder actually holds a MediaFormat. I think it's quite
likely that Format will need promoting into the root package as
part of the HLS work, which will make this even more confusing
(although it is possible that for HLS we'll define yet another
Format class, if it turns out we need significantly different
fields).
Note: I deliberately avoided renaming the formatHolder
args/params, because they're not particularly ambiguous and
because renaming them introduces some ugly line breaks.
- Bring back requirement for the first video frame to be rendered
before isReady returns true, *unless* we've deduced that the
upstream source is serving multiple renderers.
- Ditto for requiring that the audio track has some buffered data.
- Add a constants class. It currently houses a single lonely
variable, which is used generally throughout the library and so
no longer fits nicely into a specific class.
- Rename a few other constants to add clear units.
- Make a minor tweak to the ExoPlayer documentation.
1. Use ints rather than longs.
2. Remove some counters that don't seem hugely useful.
3. Replace use of volatile with explicit method calls that
cause a memory barrier (sketched below). This is a lot more
efficient than using volatile because it can be invoked only
once per doSomeWork.
- Make MediaCodecTrackRenderer.isReady more permissive.
This largely fixes #21.
- Bring WebmExtractor closer to FragmentedMp4Extractor.
The two will probably be placed under a common interface
fairly soon, which will allow significant code
deduplication.
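
The barrier trick in item 3 looks roughly like this: the playback
thread calls ensureUpdated() after writing, and any reading thread
calls it before reading, so the counter fields stay plain ints on
the hot path:

    public final class CodecCounters {
      public int renderedOutputBufferCount;
      public int skippedOutputBufferCount;

      public synchronized void ensureUpdated() {
        // Intentionally empty. Entering and exiting the monitor acts
        // as a memory barrier, publishing the plain writes above to
        // readers that also call this method.
      }
    }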
AudioTrack time will go out of sync if the decodeOnly flag
is set on arbitrary samples (as opposed to just those following
a seek). It's a pretty obscure case and it would be weird for
anyone to do it, but we should be robust against it anyway.