Some servers, probably edge cache machines that exclusively serve
chunked media, don't support partial requests, which is arguably
reasonable for that particular case. This change modifies
DefaultHttpDataSource to handle such servers correctly, by manually
skipping data up to the requested position (and making sure not to
read more data than the requested length).
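The skipping approach can be sketched as follows. This is a minimal
illustration, not the actual DefaultHttpDataSource code; the class and
method names here are hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: when the server ignores the Range header and returns the full
// resource, manually discard bytes up to the requested position, and never
// read past the requested length. Names are hypothetical.
public class SkippingSource {

  private static final int SKIP_BUFFER_SIZE = 4096;

  /** Discards {@code position} bytes by reading into a scratch buffer. */
  public static void skipFully(InputStream in, long position) throws IOException {
    byte[] scratch = new byte[SKIP_BUFFER_SIZE];
    long remaining = position;
    while (remaining > 0) {
      int read = in.read(scratch, 0, (int) Math.min(remaining, scratch.length));
      if (read == -1) {
        throw new IOException("Unexpected end of stream while skipping");
      }
      remaining -= read;
    }
  }

  /** Reads at most {@code length - bytesReadSoFar} bytes, never past the window. */
  public static int boundedRead(InputStream in, byte[] target, long bytesReadSoFar, long length)
      throws IOException {
    if (bytesReadSoFar >= length) {
      return -1; // Requested window fully read.
    }
    int toRead = (int) Math.min(target.length, length - bytesReadSoFar);
    return in.read(target, 0, toRead);
  }

  public static void main(String[] args) throws IOException {
    byte[] resource = new byte[100];
    for (int i = 0; i < resource.length; i++) {
      resource[i] = (byte) i;
    }
    InputStream in = new ByteArrayInputStream(resource);
    skipFully(in, 10); // Pretend the server ignored "Range: bytes=10-29".
    byte[] target = new byte[64];
    int read = boundedRead(in, target, 0, 20);
    System.out.println("first byte after skip: " + target[0] + ", read: " + read);
  }
}
```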
- Have extractors read from an ExtractorInput. Benefits of this are
  (a) the ability to do a "full" read or skip of a specified number
  of bytes, and (b) the ability to do multiple reads within the
  extractor's read method. The ExtractorInput will throw an
  InterruptedException if the read has been canceled.
- Provides the extractor with the ability to query the absolute
position of the data being read in the stream. This is needed for
things like parsing a segment index in fragmented mp4, where the
position of the end of the box in the stream is required because
the index offsets are all specified relative to that position.
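The read/skip/position contract above can be sketched with a
byte-array-backed input. FakeExtractorInput is a hypothetical class
for illustration; the real ExtractorInput is an interface backed by a
DataSource.

```java
// Minimal byte-array-backed sketch of the contract described above: "full"
// reads/skips plus absolute position queries. Hypothetical class.
public class FakeExtractorInput {

  private final byte[] data;
  private int position;

  public FakeExtractorInput(byte[] data, int startPosition) {
    this.data = data;
    this.position = startPosition;
  }

  /** Reads exactly {@code length} bytes, or throws if the input ends first. */
  public void readFully(byte[] target, int offset, int length) {
    if (position + length > data.length) {
      throw new IllegalStateException("Unexpected end of input");
    }
    System.arraycopy(data, position, target, offset, length);
    position += length;
  }

  /** Skips exactly {@code length} bytes, or throws if the input ends first. */
  public void skipFully(int length) {
    if (position + length > data.length) {
      throw new IllegalStateException("Unexpected end of input");
    }
    position += length;
  }

  /** Absolute position in the stream of the next byte to be read. */
  public long getPosition() {
    return position;
  }
}
```

With getPosition(), an extractor parsing a segment index can resolve
sidx-style relative offsets against the absolute position of the end
of the box, e.g. absoluteOffset = endOfBoxPosition + relativeOffset.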
Also remove uriIsFullStream. It's not doing anything particularly
useful, so I think it makes sense to remove it from the public API;
it's unlikely anyone is using it.
Issue: #329
Note: I'm fairly confident that NetworkLoadable.Parser implementations
can live without the inputEncoding being specified, but not completely
certain.
Issue: #311
Issue: #56
Use of Sample objects was inefficient for several reasons:
- Lots of objects (1 per sample, obviously).
- When switching up bitrates, there was a tendency for all Sample
instances to need to expand, which effectively led to our whole
media buffer being GC'd as each Sample discarded its byte[] to
obtain a larger one.
- When a keyframe was encountered, the Sample would typically need
to expand to accommodate it. Over time, this would lead to a
gradual increase in the population of Samples that were sized to
accommodate keyframes. These Sample instances were then typically
underutilized whenever recycled to hold a non-keyframe, leading
to inefficient memory usage.
This CL introduces RollingBuffer, which tightly packs pending sample
data into byte[]s obtained from an underlying BufferPool, fixing all
of the above. There is still an issue where the total memory
allocation may grow when switching up bitrate, but that's easy to fix
from this point, if we choose to restrict the buffer based on
allocation size rather than time.
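The tight-packing idea can be sketched as follows. PackedBuffer and
its constants are hypothetical names for illustration; the real
RollingBuffer/BufferPool pair is considerably more involved.

```java
import java.util.ArrayDeque;

// Sketch: sample data is appended tightly across fixed-size byte[]
// allocations drawn from a pool, and each allocation is returned to the
// pool once its data has been read, instead of being discarded to the GC.
public class PackedBuffer {

  private static final int ALLOCATION_SIZE = 16;

  private final ArrayDeque<byte[]> pool = new ArrayDeque<>();
  private final ArrayDeque<byte[]> allocations = new ArrayDeque<>();
  private int writeOffset = ALLOCATION_SIZE; // Offset into the last allocation.
  private int readOffset; // Offset into the first allocation.

  /** Appends sample data, packing it tightly across pooled allocations. */
  public void append(byte[] data) {
    int srcOffset = 0;
    while (srcOffset < data.length) {
      if (writeOffset == ALLOCATION_SIZE) {
        byte[] allocation = pool.isEmpty() ? new byte[ALLOCATION_SIZE] : pool.removeFirst();
        allocations.addLast(allocation);
        writeOffset = 0;
      }
      int toCopy = Math.min(data.length - srcOffset, ALLOCATION_SIZE - writeOffset);
      System.arraycopy(data, srcOffset, allocations.peekLast(), writeOffset, toCopy);
      srcOffset += toCopy;
      writeOffset += toCopy;
    }
  }

  /** Reads {@code length} bytes from the front, recycling drained allocations. */
  public byte[] read(int length) {
    byte[] out = new byte[length];
    int dstOffset = 0;
    while (dstOffset < length) {
      byte[] first = allocations.peekFirst();
      int available = (allocations.size() == 1 ? writeOffset : ALLOCATION_SIZE) - readOffset;
      int toCopy = Math.min(length - dstOffset, available);
      System.arraycopy(first, readOffset, out, dstOffset, toCopy);
      dstOffset += toCopy;
      readOffset += toCopy;
      if (readOffset == ALLOCATION_SIZE) {
        pool.addLast(allocations.removeFirst()); // Recycle rather than discard.
        readOffset = 0;
      }
    }
    return out;
  }

  public int pooledAllocationCount() {
    return pool.size();
  }
}
```

Because allocations are a fixed size and recycled, switching up
bitrate or hitting a large keyframe no longer forces per-sample
buffers to grow and be replaced wholesale.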
Issue: #278
1. Correctly replace the AES data source if the IV changes.
2. Check the largest timestamp for being equal to MIN_VALUE, and
handle this case properly.
3. Clean up AES data source a little.
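The sentinel check in item 2 can be sketched as follows, assuming the
largest timestamp starts out as Long.MIN_VALUE meaning "no samples
seen". The class and method names are hypothetical.

```java
// Sketch of the Long.MIN_VALUE sentinel: the largest timestamp is
// initialized to MIN_VALUE, and that value must be handled explicitly
// rather than treated as a real timestamp.
public class TimestampTracker {

  /** Returns the largest sample timestamp, or Long.MIN_VALUE if there are none. */
  public static long largestTimestampUs(long[] sampleTimesUs) {
    long largest = Long.MIN_VALUE;
    for (long timeUs : sampleTimesUs) {
      largest = Math.max(largest, timeUs);
    }
    return largest;
  }

  /** True only if the value came from at least one real sample. */
  public static boolean hasSamples(long largestTimestampUs) {
    // MIN_VALUE means no sample contributed a timestamp; don't use it in
    // duration arithmetic.
    return largestTimestampUs != Long.MIN_VALUE;
  }
}
```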
Issue: #162
- Move parsing onto a background thread. This is analogous
to how frame decoding is pushed to MediaCodec, and should
prevent possible jank when new subtitle samples are parsed.
This is more important for out-of-band subtitles, which can
take a second or two to parse fully.
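The hand-off described above can be sketched with a single-threaded
executor. AsyncSubtitleParser is a hypothetical name; the real
implementation manages its own thread and sample queue.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: subtitle parsing is pushed off the playback thread, analogous to
// how frame decoding is handed to MediaCodec, so a slow parse can't jank
// playback. Names are hypothetical.
public class AsyncSubtitleParser {

  private final ExecutorService parserThread = Executors.newSingleThreadExecutor();

  /** Stand-in for a potentially slow parse (e.g. a full out-of-band file). */
  private static String parse(byte[] sampleData) {
    return new String(sampleData).trim();
  }

  /** Submits a sample for parsing without blocking the caller. */
  public Future<String> submitSample(byte[] sampleData) {
    return parserThread.submit(() -> parse(sampleData));
  }

  public void release() {
    parserThread.shutdown();
  }

  public static void main(String[] args) throws Exception {
    AsyncSubtitleParser parser = new AsyncSubtitleParser();
    Future<String> cue = parser.submitSample("Hello, subtitles!\n".getBytes());
    // The playback thread stays responsive; the result is collected later.
    System.out.println(cue.get());
    parser.release();
  }
}
```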
- Add useful DataSpec method.
This API wasn't particularly nice. Best to remove it while hopefully
no one is using it. Leaving the ReadHead abstraction in place, since
it might well prove useful in the future.
- Fix broken Javadoc: the cache reference didn't work because it
  referred to a private variable (which isn't documented) from a
  public interface definition (which is), meaning the Javadoc
  generator was trying to link to documentation that didn't exist.
- Add constants class. Currently housing a single lonely variable,
which is used generally throughout the library, and so no longer
nicely fits into a specific class.
- Rename a few other constants to add clear units.
- Make minor tweak to ExoPlayer documentation.
- Add support for parsing avc3 boxes.
- Make workaround for signed sample offsets in trun boxes always enabled.
- Generalize remaining workaround into a flag, to make it easy to add additional workarounds going forward without changing the API.
- Fix DataSourceStream bug where read wouldn't return -1 after fully reading a segment whose spec length was unbounded.
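The end-of-stream case can be sketched as follows. UnboundedReader and
its constant are hypothetical names, not the actual DataSourceStream
code.

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: when the spec length is unbounded, the reader can't count down
// remaining bytes, so the underlying source's -1 is the only end-of-stream
// signal and must be propagated to the caller rather than swallowed.
public class UnboundedReader {

  public static final long LENGTH_UNBOUNDED = -1;

  public static int read(InputStream in, byte[] target, long length, long bytesRead)
      throws IOException {
    if (length != LENGTH_UNBOUNDED && bytesRead >= length) {
      return -1; // Bounded case: the requested window has been fully read.
    }
    // Unbounded case: return the source's result directly, including -1.
    return in.read(target, 0, target.length);
  }
}
```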
1. Fix SimpleCache startReadWrite asymmetry. Allow more concurrency.
- startReadWrite does not have the concept of a read lock. Once
a cached span is returned, the caller can do whatever it likes
for as long as it wants to. This allows a read to be performed
in parallel with a write that starts after it.
- If there's an ongoing write, startReadWrite will block even if
  the resulting operation would be a read. So there's a weird
  asymmetry where reads can happen in parallel with writes, but
  only if the reads were started first.
- This CL removes the asymmetry, by allowing a read to start even
if the write lock is held.
- Note that the reader needs to be prepared for the thing it's
reading to disappear, but this was already the case, and will
always be the case since the reader will need to handle disk
read failures anyway.
2. Add isCached method.
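The symmetric locking above can be sketched with per-key write locks
that don't block readers. KeyLockedCache is a hypothetical class for
illustration; the real SimpleCache hands out CacheSpans and is more
involved.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: a write locks a key so only one writer works on it at a time,
// but reads of already-cached content are allowed even while the write
// lock is held. As noted above, readers must still tolerate the content
// disappearing, just as they must tolerate disk read failures.
public class KeyLockedCache {

  private final Set<String> lockedKeys = new HashSet<>();
  private final Set<String> cachedKeys = new HashSet<>();

  /** Returns whether the key's content is cached (i.e. readable). */
  public synchronized boolean isCached(String key) {
    return cachedKeys.contains(key);
  }

  /**
   * Returns "read" immediately if the key is cached, even when a writer
   * holds the lock. Otherwise tries to take the write lock, returning
   * null if another writer already holds it.
   */
  public synchronized String startReadWrite(String key) {
    if (cachedKeys.contains(key)) {
      return "read"; // Reads proceed in parallel with any in-progress write.
    }
    if (lockedKeys.contains(key)) {
      return null; // Another writer owns this key.
    }
    lockedKeys.add(key);
    return "write";
  }

  /** Marks a write complete and releases the key's write lock. */
  public synchronized void commitWrite(String key) {
    cachedKeys.add(key);
    lockedKeys.remove(key);
  }
}
```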