This allows implementation and injection of custom MediaCodecSelector
instances. By injecting a custom selector, it's possible for applications
to exert more control over which decoder(s) they instantiate. For example,
applications can force use of a software decoder.
GitHub Issue #938
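A minimal sketch of the kind of selection a custom selector might perform. The
helper below is purely illustrative (it is not part of the library) and simply
looks up a Google software decoder for a MIME type via the framework
MediaCodecList API; a custom MediaCodecSelector could use a lookup like this to
force a software decoder:

  import android.media.MediaCodecInfo;
  import android.media.MediaCodecList;

  // Illustrative helper, not part of ExoPlayer: returns the name of a software
  // decoder for the given MIME type, or null if none is available (API 21+).
  static String findSoftwareDecoderName(String mimeType) {
    for (MediaCodecInfo info :
        new MediaCodecList(MediaCodecList.REGULAR_CODECS).getCodecInfos()) {
      if (info.isEncoder() || !info.getName().startsWith("OMX.google.")) {
        continue;
      }
      for (String type : info.getSupportedTypes()) {
        if (type.equalsIgnoreCase(mimeType)) {
          return info.getName();
        }
      }
    }
    return null;
  }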
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=110369810
Context:
- Currently, playback is significantly more juddery with it disabled,
particularly on AndroidTV.
- We should be able to do the "best" job of this internally, so injection
doesn't buy anything useful. If someone has a better implementation for
adjusting the frame release, they should improve the core library.
Remove MPEG TS stream filtering based on AudioCapabilities.
Pass AudioCapabilities to MediaCodecAudioTrackRenderer so it can choose between
passthrough/raw and decoding for AC-3 tracks.
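A rough sketch, assuming AudioCapabilities.supportsEncoding as the query, of the
decision the renderer can now make for an AC-3 track (the helper name is made up
for illustration and is not the renderer's actual code):

  import android.media.AudioFormat;
  import com.google.android.exoplayer.audio.AudioCapabilities;

  // Illustrative only: prefer passthrough when the connected audio output
  // reports AC-3 support, otherwise fall back to decoding.
  static boolean shouldUseAc3Passthrough(AudioCapabilities audioCapabilities) {
    return audioCapabilities != null
        && audioCapabilities.supportsEncoding(AudioFormat.ENCODING_AC3);
  }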
- Generalize rendererEnabledFlags to be selected track indices through
ExoPlayerImpl/ExoPlayerImplInternal.
- Selecting an out-of-bounds track index (e.g. -1) is equivalent to
disabling a renderer prior to the generalization (see the sketch below).
- A prepared TrackRenderer that exposes 0 tracks is equivalent to a
TrackRenderer in the STATE_IGNORE state prior to the generalization.
Issue #514.
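A sketch of how the generalized selection surfaces at the player API, assuming
the setSelectedTrack/TRACK_DISABLED methods built on top of it (exact names may
differ between releases):

  import com.google.android.exoplayer.ExoPlayer;

  // Illustrative only: disabling a renderer is now just selecting an
  // out-of-bounds track index (TRACK_DISABLED == -1).
  static void disableRenderer(ExoPlayer player, int rendererIndex) {
    player.setSelectedTrack(rendererIndex, ExoPlayer.TRACK_DISABLED);
  }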
The new logic assumes that an input format change will be
followed by an output format change, but I think this is
pretty much guaranteed. If this weren't to happen then the
new pixel aspect ratio won't be picked up, but I think it's
extremely unlikely (it would require the format to stay
exactly the same except for the pixel aspect ratio, which
would be bizarre).
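For context, a toy illustration (made-up numbers, not code from this change) of
what picking up a new pixel aspect ratio means for display geometry:

  // Illustrative only: a 1440x1080 anamorphic stream with a 4:3 pixel aspect
  // ratio should be displayed as 1920x1080, i.e. 16:9.
  static int displayWidth(int codedWidth, float pixelWidthHeightRatio) {
    return (int) (codedWidth * pixelWidthHeightRatio);
  }
  // displayWidth(1440, 4f / 3f) == 1920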
The OMX component needs to be configured with a format that has a
MIME type of audio/raw. Remove Ac3PassthroughAudioTrackRenderer,
which is no longer used.
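Roughly what configuring with a raw-audio format looks like, sketched with the
framework MediaFormat/MediaCodec API and placeholder sample rate and channel
count (not values taken from this change):

  import android.media.MediaCodec;
  import android.media.MediaFormat;

  // Illustrative only: configure a codec with an audio/raw format (API 21+).
  static void configureRaw(MediaCodec codec, int sampleRate, int channelCount) {
    MediaFormat format = MediaFormat.createAudioFormat(
        MediaFormat.MIMETYPE_AUDIO_RAW, sampleRate, channelCount);
    codec.configure(format, /* surface= */ null, /* crypto= */ null, /* flags= */ 0);
  }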
The complexity around not enabling the video renderer before it
has a valid surface is because MediaCodecTrackRenderer supports
a "discard" mode where it pulls through and discards samples
without a decoder. This mode means that if the demo app were to
enable the renderer before supplying the surface, the renderer
could discard the first few frames prior to getting the surface,
meaning video rendering wouldn't happen until the following sync
frame.
To get a handle on complexity, I think we're better off just removing
support for this mode, which nicely decouples how the demo app
handles surfaces vs. how it handles enabling/disabling renderers.
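With discard mode gone, the demo app just has to hand the renderer its surface
before enabling it. A sketch of that ordering using the existing message-passing
API (the attachSurface helper name is made up for illustration):

  import android.view.Surface;
  import com.google.android.exoplayer.ExoPlayer;
  import com.google.android.exoplayer.MediaCodecVideoTrackRenderer;

  // Illustrative only: push the surface to the renderer (blocking, so it has
  // taken effect) before the renderer is enabled.
  static void attachSurface(
      ExoPlayer player, MediaCodecVideoTrackRenderer videoRenderer, Surface surface) {
    player.blockingSendMessage(
        videoRenderer, MediaCodecVideoTrackRenderer.MSG_SET_SURFACE, surface);
  }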
Propagate elapsedRealtimeUs to the video renderer. This allows
the renderer to calculate and adjust for the elapsed time since
the start of the current rendering loop. Typically this is <2ms,
but there are situations where it can go higher (normally when the
video renderer ends up processing more than 1 output buffer in
a single loop).
Also made variable naming more consistent throughout the package.
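A sketch of the adjustment described above (not the renderer's actual code): how
early a buffer is, corrected for the time already spent in the current rendering
loop:

  import android.os.SystemClock;

  // earlyUs > 0 means the buffer's presentation time is still in the future.
  static long adjustedEarlyUs(
      long bufferPresentationTimeUs, long positionUs, long elapsedRealtimeUs) {
    long earlyUs = bufferPresentationTimeUs - positionUs;
    long elapsedSinceLoopStartUs =
        SystemClock.elapsedRealtime() * 1000 - elapsedRealtimeUs;
    return earlyUs - elapsedSinceLoopStartUs;
  }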
- Use native frame release timing in video renderer for
smoother video playback.
- Avoid unnecessary memory copy steps in audio renderer.
- Use the non-blocking AudioTrack API (see the sketch after this list).
- Bring back requirement for the first video frame to be rendered
before isReady returns true, *unless* we've deduced that the
upstream source is serving multiple renderers.
- Ditto for requiring that the audio track has some buffered data.
- Make MediaCodecTrackRenderer.isReady more permissive.
This largely fixes #21
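For the non-blocking AudioTrack point above, a sketch of the write pattern on
API 21+ (not the renderer's actual code):

  import android.media.AudioTrack;
  import java.nio.ByteBuffer;

  // Write whatever the track will accept now; a short write just means the
  // remaining data should be retried on the next loop instead of blocking.
  static int writeNonBlocking(AudioTrack audioTrack, ByteBuffer buffer) {
    return audioTrack.write(buffer, buffer.remaining(), AudioTrack.WRITE_NON_BLOCKING);
  }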
- Bring WebmExtractor closer to FragmentedMp4Extractor.
The two will probably be placed under a common interface
fairly soon, which will allow significant code
deduplication.
AudioTrack time will go out of sync if the decodeOnly flag
is set on arbitrary samples (as opposed to just those following
a seek). It's a pretty obscure case and it would be weird for
anyone to do it, but we should be robust against it anyway.