Compare commits


No commits in common. "main" and "0.2" have entirely different histories.
main ... 0.2

11 changed files with 276 additions and 429 deletions

View file

@@ -1,153 +0,0 @@
# Changelog
## [Unreleased]
- Your change here.
[Unreleased]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.4.0...HEAD
## [0.4.0] - 2025-09-10
### Fixed
- Fixed building with Xcode 26 RC
[0.4.0]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.9...0.4.0
## [0.3.9] - 2025-05-25
### Fixed
- Fixed crash on iOS 17 by using a new task instead of assumeIsolated
[0.3.9]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.8...0.3.9
## [0.3.8] - 2025-04-04
### Fixed
- Fixed crash when cancelled while writing samples
- Fixed tests with Swift 6.1 on macOS
- Fixed tests in Xcode 16.4 on macOS 15.5
- Fixed warnings in tests in Xcode 16.3
### Changed
- Stopped relying on specific delay in cancellation test
- Updated readme for 0.3.8
[0.3.8]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.7...0.3.8
## [0.3.7] - 2025-01-19
### Fixed
- Simplified cancellation and fixed memory leak
[0.3.7]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.6...0.3.7
## [0.3.6] - 2025-01-19
### Fixed
- Attempted to fix possible retain cycle
[0.3.6]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.5...0.3.6
## [0.3.5] - 2025-01-19
### Fixed
- Improved cancellation response (potential memory leak issue)
### Removed
- Deleted dead code
### Changed
- Extracted BaseTests class for better test organization
[0.3.5]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.4...0.3.5
## [0.3.4] - 2024-11-08
### Fixed
- [#3](https://github.com/samsonjs/SJSAssetExportSession/pull/3): Fixed encoding stalling by interleaving audio and video samples - [@samsonjs](https://github.com/samsonjs).
### Changed
- Updated readme with additional documentation
[0.3.4]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.3...0.3.4
## [0.3.3] - 2024-10-19
### Changed
- Made AudioOutputSettings and VideoOutputSettings properties public
### Fixed
- Made tests work on iOS 18.0 and iOS 18.1
- Fixed progress test
### Removed
- Removed SampleWriter.duration property
[0.3.3]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.2...0.3.3
## [0.3.2] - 2024-10-19
### Fixed
- Fixed release builds by using makeStream for SampleWriter's progress
### Changed
- Updated example in readme to version 0.3.2
[0.3.2]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3.1...0.3.2
## [0.3.1] - 2024-10-19
### Fixed
- Removed unnecessary Task.yield() to fix intermittent hang
### Changed
- Improved code style and debuggability
- Updated version in readme to 0.3.1
[0.3.1]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.3...0.3.1
## [0.3] - 2024-10-18
### Added
- Made audio/video settings Hashable, Sendable, and Codable
### Changed
- Updated readme for version 0.3
- Fixed SwiftPM instructions in readme
[0.3]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.2...0.3
## [0.2] - 2024-10-04
### Fixed
- [#2](https://github.com/samsonjs/SJSAssetExportSession/pull/2): Fixed spatial audio handling by dropping spatial audio tracks to fix encoding iPhone 16 videos - [@samsonjs](https://github.com/samsonjs).
### Changed
- Code style improvements
- Updated version in readme's SPM example
[0.2]: https://github.com/samsonjs/SJSAssetExportSession/compare/0.1...0.2
## [0.1] - 2024-09-18
### Added
- Initial release as Swift Package
- Alternative to AVAssetExportSession with custom audio/video settings
- Builder pattern API for AudioOutputSettings and VideoOutputSettings
- Flexible raw dictionary API for maximum control
- Progress reporting via AsyncStream
- Support for iOS 17.0+, macOS 14.0+, and visionOS 1.3+
- Swift 6 strict concurrency support
- Comprehensive test suite with multiple video formats
### Changed
- Converted from Xcode project to Swift package
- Made yielding last progress value more reliable
- Set deployment targets to iOS 17, macOS 14, and visionOS 1.3
### Added
- Support for writing metadata on assets
- Documentation for most public API
- README and license files
[0.1]: https://github.com/samsonjs/SJSAssetExportSession/releases/tag/0.1

View file

@@ -24,7 +24,6 @@ let package = Package(
.process("Resources/test-no-audio.mp4"),
.process("Resources/test-no-video.m4a"),
.process("Resources/test-spatial-audio.mov"),
.process("Resources/test-x264-1080p-h264-60fps.mp4"),
]
),
]

View file

@@ -1,9 +1,5 @@
# SJSAssetExportSession
[![0 dependencies!](https://0dependencies.dev/0dependencies.svg)](https://0dependencies.dev)
[![](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Fsamsonjs%2FSJSAssetExportSession%2Fbadge%3Ftype%3Dswift-versions)](https://swiftpackageindex.com/samsonjs/SJSAssetExportSession)
[![](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Fsamsonjs%2FSJSAssetExportSession%2Fbadge%3Ftype%3Dplatforms)](https://swiftpackageindex.com/samsonjs/SJSAssetExportSession)
## Overview
`SJSAssetExportSession` is an alternative to [`AVAssetExportSession`][AV] that lets you provide custom audio and video settings, without dropping down into the world of `AVAssetReader` and `AVAssetWriter`. It has similar capabilities to [SDAVAssetExportSession][SDAV] but the API is completely different, the code is written in Swift, and it's ready for the world of strict concurrency.
@@ -31,14 +27,12 @@ When you're integrating this into an app with Xcode then go to your project's Pa
### Swift Package Manager (SPM)
When you're integrating this using SPM on its own then add this to the list of dependencies in your Package.swift file:
When you're integrating this using SPM on its own then add this to your Package.swift file:
```swift
.package(url: "https://github.com/samsonjs/SJSAssetExportSession.git", .upToNextMajor(from: "0.4.0"))
.package(url: "https://github.com/samsonjs/SJSAssetExportSession.git", .upToNextMajor(from: "0.2"))
```
and then add `"SJSAssetExportSession"` to the list of dependencies in your target as well.
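For context, a hypothetical target declaration showing where the product name goes (the target name `MyApp` is a placeholder, not from this repository):

```swift
// In Package.swift: a sketch of wiring the package product into a target.
.target(
    name: "MyApp",
    dependencies: ["SJSAssetExportSession"]
)
```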
## Usage
There are two ways of exporting assets: one using dictionaries for audio and video settings just like with `SDAVAssetExportSession`, and the other using a builder-like API with data structures for commonly used settings.
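A minimal sketch of the builder-style path, mirroring the calls that appear in this compare's test suite; `sourceURL` and `outputURL` are hypothetical placeholders, and the call must run in an async context:

```swift
import AVFoundation
import SJSAssetExportSession

// Placeholder locations for the input and output files.
let sourceURL = URL(filePath: "/path/to/input.mov")
let outputURL = URL.temporaryDirectory.appending(component: "output.mp4")

// Export 720p H.264 at 30 fps using the builder-like settings API.
let exporter = ExportSession()
try await exporter.export(
    asset: AVURLAsset(url: sourceURL),
    video: .codec(.h264, width: 1280, height: 720).fps(30),
    to: outputURL,
    as: .mp4
)
```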
@@ -198,6 +192,6 @@ try await exporter.export(
## License
Copyright © 2024-2025 [Sami Samhuri](https://samhuri.net) <sami@samhuri.net>. Released under the terms of the [MIT License][MIT].
Copyright © 2024 Sami Samhuri, https://samhuri.net <sami@samhuri.net>. Released under the terms of the [MIT License][MIT].
[MIT]: https://sjs.mit-license.org

View file

@@ -5,12 +5,12 @@
// Created by Sami Samhuri on 2024-07-07.
//
import AVFoundation
public import AVFoundation
/// A convenient API for constructing audio settings dictionaries.
///
/// Construct this by starting with ``AudioOutputSettings/default`` or ``AudioOutputSettings/format(_:)`` and then chain calls to further customize it, if desired, using ``channels(_:)``, and ``sampleRate(_:)``.
public struct AudioOutputSettings: Hashable, Sendable, Codable {
/// Construct this by starting with ``AudioOutputSettings/default`` or ``AudioOutputSettings/format(_:)`` and then chain calls to further customize it, if desired, using ``channels(_:)``, ``sampleRate(_:)``, and ``mix(_:)``.
public struct AudioOutputSettings {
/// Describes the output file format.
public enum Format {
/// Advanced Audio Codec. The audio format typically used for MPEG-4 audio.
@@ -26,9 +26,10 @@ public struct AudioOutputSettings: Hashable, Sendable, Codable {
}
}
public let format: AudioFormatID
public let channels: Int
public let sampleRate: Int?
let format: AudioFormatID
let channels: Int
let sampleRate: Int?
let mix: AVAudioMix?
/// Specifies the AAC format with 2 channels at a 44.1 KHz sample rate.
public static var `default`: AudioOutputSettings {
@@ -37,15 +38,19 @@ public struct AudioOutputSettings: Hashable, Sendable, Codable {
/// Specifies the given format with 2 channels.
public static func format(_ format: Format) -> AudioOutputSettings {
.init(format: format.formatID, channels: 2, sampleRate: nil)
.init(format: format.formatID, channels: 2, sampleRate: nil, mix: nil)
}
public func channels(_ channels: Int) -> AudioOutputSettings {
.init(format: format, channels: channels, sampleRate: sampleRate)
.init(format: format, channels: channels, sampleRate: sampleRate, mix: mix)
}
public func sampleRate(_ sampleRate: Int?) -> AudioOutputSettings {
.init(format: format, channels: channels, sampleRate: sampleRate)
.init(format: format, channels: channels, sampleRate: sampleRate, mix: mix)
}
public func mix(_ mix: sending AVAudioMix?) -> AudioOutputSettings {
.init(format: format, channels: channels, sampleRate: sampleRate, mix: mix)
}
public var settingsDictionary: [String: any Sendable] {

View file

@@ -30,8 +30,6 @@ public final class ExportSession: Sendable {
- audio: Optional audio settings using ``AudioOutputSettings``. Defaults to ``AudioOutputSettings/default``.
- mix: An optional mix that can be used to manipulate the audio in some way.
- video: Video settings using ``VideoOutputSettings``.
- outputURL: The file `URL` where the exported video will be written.
@@ -46,7 +44,6 @@ public final class ExportSession: Sendable {
metadata: sending [AVMetadataItem] = [],
timeRange: CMTimeRange? = nil,
audio: sending AudioOutputSettings = .default,
mix: sending AVAudioMix? = nil,
video: sending VideoOutputSettings,
to outputURL: URL,
as fileType: AVFileType
@@ -57,7 +54,7 @@ public final class ExportSession: Sendable {
let sampleWriter = try await SampleWriter(
asset: asset,
audioOutputSettings: audio.settingsDictionary,
audioMix: mix,
audioMix: audio.mix,
videoOutputSettings: video.settingsDictionary,
videoComposition: videoComposition,
timeRange: timeRange,
@@ -67,7 +64,7 @@ public final class ExportSession: Sendable {
fileType: fileType
)
Task { [progressContinuation] in
for await progress in sampleWriter.progressStream {
for await progress in await sampleWriter.progressStream {
progressContinuation.yield(progress)
}
}
@@ -149,7 +146,7 @@ public final class ExportSession: Sendable {
fileType: fileType
)
Task { [progressContinuation] in
for await progress in sampleWriter.progressStream {
for await progress in await sampleWriter.progressStream {
progressContinuation.yield(progress)
}
}

View file

@@ -27,26 +27,24 @@ actor SampleWriter {
queue.asUnownedSerialExecutor()
}
let progressStream: AsyncStream<Float>
private let progressContinuation: AsyncStream<Float>.Continuation
// MARK: Inputs
lazy var progressStream: AsyncStream<Float> = AsyncStream { continuation in
progressContinuation = continuation
}
private var progressContinuation: AsyncStream<Float>.Continuation?
private let audioOutputSettings: [String: any Sendable]
private let audioMix: AVAudioMix?
private let videoOutputSettings: [String: any Sendable]
private let videoComposition: AVVideoComposition?
private let reader: AVAssetReader
private let writer: AVAssetWriter
private let duration: CMTime
private let timeRange: CMTimeRange
// MARK: Internal state
private var reader: AVAssetReader?
private var writer: AVAssetWriter?
private var audioOutput: AVAssetReaderAudioMixOutput?
private var audioInput: AVAssetWriterInput?
private var videoOutput: AVAssetReaderVideoCompositionOutput?
private var videoInput: AVAssetWriterInput?
private var isCancelled = false
nonisolated init(
asset: sending AVAsset,
@@ -62,28 +60,18 @@ actor SampleWriter {
) async throws {
precondition(!videoOutputSettings.isEmpty)
(progressStream, progressContinuation) = AsyncStream<Float>.makeStream()
let duration = if let timeRange {
timeRange.duration
} else {
try await asset.load(.duration)
}
let reader = try AVAssetReader(asset: asset)
if let timeRange {
reader.timeRange = timeRange
}
self.reader = reader
let writer = try AVAssetWriter(outputURL: outputURL, fileType: fileType)
writer.shouldOptimizeForNetworkUse = optimizeForNetworkUse
writer.metadata = metadata
self.writer = writer
self.audioOutputSettings = audioOutputSettings
self.audioMix = audioMix
self.videoOutputSettings = videoOutputSettings
self.videoComposition = videoComposition
self.timeRange = if let timeRange {
timeRange
} else {
try await CMTimeRange(start: .zero, duration: asset.load(.duration))
}
// Filter out disabled tracks to avoid problems encoding spatial audio. Ideally this would
// preserve track groups and make that all configurable.
@@ -92,24 +80,7 @@ actor SampleWriter {
// Audio is optional so only validate output settings when it's applicable.
if !audioTracks.isEmpty {
try Self.validateAudio(outputSettings: audioOutputSettings, writer: writer)
let audioOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: nil)
audioOutput.alwaysCopiesSampleData = false
audioOutput.audioMix = audioMix
guard reader.canAdd(audioOutput) else {
throw Error.setupFailure(.cannotAddAudioOutput)
}
reader.add(audioOutput)
self.audioOutput = audioOutput
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
audioInput.expectsMediaDataInRealTime = false
guard writer.canAdd(audioInput) else {
throw Error.setupFailure(.cannotAddAudioInput)
}
writer.add(audioInput)
self.audioInput = audioInput
}
let videoTracks = try await asset.loadTracks(withMediaType: .video)
.filterAsync { try await $0.load(.isEnabled) }
guard !videoTracks.isEmpty else { throw Error.setupFailure(.videoTracksEmpty) }
@@ -118,6 +89,92 @@ actor SampleWriter {
renderSize: videoComposition.renderSize,
settings: videoOutputSettings
)
self.audioOutputSettings = audioOutputSettings
self.audioMix = audioMix
self.videoOutputSettings = videoOutputSettings
self.videoComposition = videoComposition
self.reader = reader
self.writer = writer
self.duration = duration
self.timeRange = timeRange ?? CMTimeRange(start: .zero, duration: duration)
try await setUpAudio(audioTracks: audioTracks)
try await setUpVideo(videoTracks: videoTracks)
}
func writeSamples() async throws {
try Task.checkCancellation()
progressContinuation?.yield(0.0)
writer.startWriting()
writer.startSession(atSourceTime: timeRange.start)
reader.startReading()
try Task.checkCancellation()
await encodeAudioTracks()
try Task.checkCancellation()
await encodeVideoTracks()
try Task.checkCancellation()
guard reader.status != .cancelled && writer.status != .cancelled else {
throw CancellationError()
}
guard writer.status != .failed else {
reader.cancelReading()
throw Error.writeFailure(writer.error)
}
guard reader.status != .failed else {
writer.cancelWriting()
throw Error.readFailure(reader.error)
}
await withCheckedContinuation { continuation in
writer.finishWriting {
continuation.resume(returning: ())
}
}
progressContinuation?.yield(1.0)
// Make sure the last progress value is yielded before returning.
await Task.yield()
await withCheckedContinuation { continuation in
progressContinuation?.onTermination = { _ in
continuation.resume(returning: ())
}
progressContinuation?.finish()
}
}
// MARK: - Setup
private func setUpAudio(audioTracks: [AVAssetTrack]) throws {
guard !audioTracks.isEmpty else { return }
let audioOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: nil)
audioOutput.alwaysCopiesSampleData = false
audioOutput.audioMix = audioMix
guard reader.canAdd(audioOutput) else {
throw Error.setupFailure(.cannotAddAudioOutput)
}
reader.add(audioOutput)
self.audioOutput = audioOutput
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
audioInput.expectsMediaDataInRealTime = false
guard writer.canAdd(audioInput) else {
throw Error.setupFailure(.cannotAddAudioInput)
}
writer.add(audioInput)
self.audioInput = audioInput
}
private func setUpVideo(videoTracks: [AVAssetTrack]) throws {
precondition(!videoTracks.isEmpty, "Video tracks must be provided")
let videoOutput = AVAssetReaderVideoCompositionOutput(
videoTracks: videoTracks,
videoSettings: nil
@@ -139,100 +196,83 @@ actor SampleWriter {
self.videoInput = videoInput
}
func writeSamples() async throws {
guard let reader, let writer else { throw CancellationError() }
try Task.checkCancellation()
// Clear all of these properties otherwise when we get cancelled then we leak a bunch of
// pixel buffers.
defer {
if Task.isCancelled {
reader.cancelReading()
writer.cancelWriting()
}
self.reader = nil
self.writer = nil
audioInput = nil
audioOutput = nil
videoInput = nil
videoOutput = nil
}
progressContinuation.yield(0.0)
writer.startWriting()
writer.startSession(atSourceTime: timeRange.start)
reader.startReading()
try Task.checkCancellation()
startEncodingAudioTracks()
startEncodingVideoTracks()
while reader.status == .reading, writer.status == .writing {
try await Task.sleep(for: .milliseconds(10))
}
guard writer.status != .failed else {
reader.cancelReading()
throw Error.writeFailure(writer.error)
}
guard reader.status != .failed else {
writer.cancelWriting()
throw Error.readFailure(reader.error)
}
await withCheckedContinuation { continuation in
writer.finishWriting {
continuation.resume(returning: ())
}
}
progressContinuation.yield(1.0)
// Make sure the last progress value is yielded before returning.
await withCheckedContinuation { continuation in
progressContinuation.onTermination = { _ in
continuation.resume(returning: ())
}
progressContinuation.finish()
}
func cancel() async {
isCancelled = true
}
// MARK: - Encoding
private func startEncodingAudioTracks() {
private func encodeAudioTracks() async {
// Don't do anything when we have no audio to encode.
guard let audioInput, audioOutput != nil else {
return
}
guard audioInput != nil, audioOutput != nil else { return }
audioInput.requestMediaDataWhenReady(on: queue) {
Task { await self.writeAllReadySamples() }
return await withTaskCancellationHandler {
await withCheckedContinuation { continuation in
self.audioInput!.requestMediaDataWhenReady(on: queue) {
self.assumeIsolated { _self in
guard !_self.isCancelled else {
log.debug("Cancelled while encoding audio")
_self.reader.cancelReading()
_self.writer.cancelWriting()
continuation.resume()
return
}
let hasMoreSamples = _self.writeReadySamples(
output: _self.audioOutput!,
input: _self.audioInput!
)
if !hasMoreSamples {
log.debug("Finished encoding audio")
continuation.resume()
}
}
}
}
} onCancel: {
log.debug("Task cancelled while encoding audio")
Task {
await self.cancel()
}
}
}
private func startEncodingVideoTracks() {
videoInput?.requestMediaDataWhenReady(on: queue) {
Task { await self.writeAllReadySamples() }
}
}
private func encodeVideoTracks() async {
return await withTaskCancellationHandler {
await withCheckedContinuation { continuation in
self.videoInput!.requestMediaDataWhenReady(on: queue) {
#warning("FIXME: why is this broken on macOS?!")
self.assumeIsolated { _self in
guard !_self.isCancelled else {
log.debug("Cancelled while encoding video")
_self.reader.cancelReading()
_self.writer.cancelWriting()
continuation.resume()
return
}
private func writeAllReadySamples() {
if let audioInput, let audioOutput {
let hasMoreAudio = writeReadySamples(output: audioOutput, input: audioInput)
if !hasMoreAudio { log.debug("Finished encoding audio") }
}
if let videoInput, let videoOutput {
let hasMoreVideo = writeReadySamples(output: videoOutput, input: videoInput)
if !hasMoreVideo { log.debug("Finished encoding video") }
let hasMoreSamples = _self.writeReadySamples(
output: _self.videoOutput!,
input: _self.videoInput!
)
if !hasMoreSamples {
log.debug("Finished encoding video")
continuation.resume()
}
}
}
}
} onCancel: {
log.debug("Task cancelled while encoding video")
Task {
await self.cancel()
}
}
}
private func writeReadySamples(output: AVAssetReaderOutput, input: AVAssetWriterInput) -> Bool {
while input.isReadyForMoreMediaData {
guard reader?.status == .reading && writer?.status == .writing,
guard reader.status == .reading && writer.status == .writing,
let sampleBuffer = output.copyNextSampleBuffer() else {
input.markAsFinished()
return false
@@ -240,16 +280,15 @@ actor SampleWriter {
// Only yield progress values for video. Audio is insignificant in comparison.
if output == videoOutput {
let endTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
let samplePresentationTime = endTime - timeRange.start
let progress = Float(samplePresentationTime.seconds / timeRange.duration.seconds)
progressContinuation.yield(progress)
let samplePresentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer) - timeRange.start
let progress = Float(samplePresentationTime.seconds / duration.seconds)
progressContinuation?.yield(progress)
}
guard input.append(sampleBuffer) else {
log.error("""
Failed to append sample buffer \(String(describing: sampleBuffer)) to input
\(input.debugDescription)
Failed to append audio sample buffer \(String(describing: sampleBuffer)) to
input \(input.debugDescription)
""")
return false
}
@@ -291,7 +330,7 @@ actor SampleWriter {
let renderWidth = Int(renderSize.width)
let renderHeight = Int(renderSize.height)
if renderWidth != settingsWidth || renderHeight != settingsHeight {
log.warning("Video composition's render size (\(renderWidth)\(renderHeight)) will be overridden by video output settings (\(settingsWidth)\(settingsHeight))")
log.warning("Video composition's render size (\(renderWidth)\(renderHeight)) will be overriden by video output settings (\(settingsWidth)\(settingsHeight))")
}
}
}

View file

@@ -12,9 +12,9 @@ import AVFoundation
/// Construct this by starting with ``VideoOutputSettings/codec(_:size:)`` or ``VideoOutputSettings/codec(_:width:height:)`` and then chaining calls to further customize it, if desired, using ``fps(_:)``, ``bitrate(_:)``, and ``color(_:)``.
///
/// Setting the fps and colour also needs support from the `AVVideoComposition` and these settings can be applied to them with ``VideoOutputSettings/apply(to:)``.
public struct VideoOutputSettings: Hashable, Sendable, Codable {
public struct VideoOutputSettings {
/// Describes an H.264 encoding profile.
public enum H264Profile: Hashable, Sendable, Codable {
public enum H264Profile {
case baselineAuto, baseline30, baseline31, baseline41
case mainAuto, main31, main32, main41
case highAuto, high40, high41
@@ -37,7 +37,7 @@ public struct VideoOutputSettings: Hashable, Sendable, Codable {
}
/// Specifies the output codec.
public enum Codec: Hashable, Sendable, Codable {
public enum Codec {
/// H.264 using the associated encoding profile.
case h264(H264Profile)
/// HEVC / H.265
@@ -64,7 +64,7 @@ public struct VideoOutputSettings: Hashable, Sendable, Codable {
}
/// Specifies whether to use Standard Dynamic Range or High Dynamic Range colours.
public enum Color: Hashable, Sendable, Codable {
public enum Color {
/// Standard dynamic range colours (BT.709 which roughly corresponds to SRGB)
case sdr
/// High dynamic range colours (BT.2020)
@@ -88,11 +88,11 @@ public struct VideoOutputSettings: Hashable, Sendable, Codable {
}
}
public let codec: Codec
public let size: CGSize
public let fps: Int?
public let bitrate: Int?
public let color: Color?
let codec: Codec
let size: CGSize
let fps: Int?
let bitrate: Int?
let color: Color?
public static func codec(_ codec: Codec, size: CGSize) -> VideoOutputSettings {
.init(codec: codec, size: size, fps: nil, bitrate: nil, color: nil)

View file

@@ -0,0 +1,14 @@
//
// AVAsset+sending.swift
// SJSAssetExportSessionTests
//
// Created by Sami Samhuri on 2024-07-07.
//
import AVFoundation
extension AVAsset {
func sendTracks(withMediaType mediaType: AVMediaType) async throws -> sending [AVAssetTrack] {
try await loadTracks(withMediaType: mediaType)
}
}

View file

@@ -1,50 +0,0 @@
//
// BaseTests.swift
// SJSAssetExportSession
//
// Created by Sami Samhuri on 2025-01-19.
//
import AVFoundation
import Foundation
import Testing
class BaseTests {
func resourceURL(named name: String) -> URL {
Bundle.module.resourceURL!.appending(component: name)
}
func makeAsset(url: URL) -> sending AVAsset {
AVURLAsset(url: url, options: [
AVURLAssetPreferPreciseDurationAndTimingKey: true,
])
}
func makeTemporaryURL(function: String = #function) -> AutoDestructingURL {
let timestamp = Int(Date.now.timeIntervalSince1970)
let f = function.replacing(/[\(\)]/, with: { _ in "" })
let filename = "\(Self.self)_\(f)_\(timestamp).mp4"
let url = URL.temporaryDirectory.appending(component: filename)
return AutoDestructingURL(url: url)
}
func makeVideoComposition(
assetURL: URL,
size: CGSize? = nil,
fps: Int? = nil
) async throws -> sending AVMutableVideoComposition {
let asset = makeAsset(url: assetURL)
let videoComposition = try await AVMutableVideoComposition.videoComposition(
withPropertiesOf: asset
)
if let size {
videoComposition.renderSize = size
}
if let fps {
let seconds = 1.0 / TimeInterval(fps)
videoComposition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid
videoComposition.frameDuration = CMTime(seconds: seconds, preferredTimescale: 600)
}
return videoComposition
}
}

View file

@@ -10,7 +10,45 @@ import CoreLocation
import SJSAssetExportSession
import Testing
final class ExportSessionTests: BaseTests {
final class ExportSessionTests {
private func resourceURL(named name: String) -> URL {
Bundle.module.resourceURL!.appending(component: name)
}
private func makeAsset(url: URL) -> sending AVAsset {
AVURLAsset(url: url, options: [
AVURLAssetPreferPreciseDurationAndTimingKey: true,
])
}
private func makeTemporaryURL(function: String = #function) -> AutoDestructingURL {
let timestamp = Int(Date.now.timeIntervalSince1970)
let f = function.replacing(/[\(\)]/, with: { _ in "" })
let filename = "\(Self.self)_\(f)_\(timestamp).mp4"
let url = URL.temporaryDirectory.appending(component: filename)
return AutoDestructingURL(url: url)
}
private func makeVideoComposition(
assetURL: URL,
size: CGSize? = nil,
fps: Int? = nil
) async throws -> sending AVMutableVideoComposition {
let asset = makeAsset(url: assetURL)
let videoComposition = try await AVMutableVideoComposition.videoComposition(
withPropertiesOf: asset
)
if let size {
videoComposition.renderSize = size
}
if let fps {
let seconds = 1.0 / TimeInterval(fps)
videoComposition.sourceTrackIDForFrameTiming = kCMPersistentTrackID_Invalid
videoComposition.frameDuration = CMTime(seconds: seconds, preferredTimescale: 600)
}
return videoComposition
}
@Test func test_sugary_export_720p_h264_24fps() async throws {
let sourceURL = resourceURL(named: "test-4k-hdr-hevc-30fps.mov")
let destinationURL = makeTemporaryURL()
@@ -30,20 +68,19 @@ final class ExportSessionTests: BaseTests {
let exportedAsset = AVURLAsset(url: destinationURL.url)
#expect(try await exportedAsset.load(.duration) == .seconds(1))
// Audio
try #require(try await exportedAsset.loadTracks(withMediaType: .audio).count == 1)
let audioTrack = try #require(await exportedAsset.loadTracks(withMediaType: .audio).first)
try #require(try await exportedAsset.sendTracks(withMediaType: .audio).count == 1)
let audioTrack = try #require(await exportedAsset.sendTracks(withMediaType: .audio).first)
let audioFormat = try #require(await audioTrack.load(.formatDescriptions).first)
#expect(audioFormat.mediaType == .audio)
#expect(audioFormat.mediaSubType == .mpeg4AAC)
#expect(audioFormat.audioChannelLayout?.numberOfChannels == 2)
#expect(audioFormat.audioStreamBasicDescription?.mSampleRate == 44_100)
// Video
try #require(await exportedAsset.loadTracks(withMediaType: .video).count == 1)
let videoTrack = try #require(await exportedAsset.loadTracks(withMediaType: .video).first)
try #require(await exportedAsset.sendTracks(withMediaType: .video).count == 1)
let videoTrack = try #require(await exportedAsset.sendTracks(withMediaType: .video).first)
#expect(try await videoTrack.load(.naturalSize) == CGSize(width: 1280, height: 720))
#expect(try await videoTrack.load(.nominalFrameRate) == 24.0)
let dataRate = try await videoTrack.load(.estimatedDataRate)
#expect((900_000 ... 1_130_000).contains(dataRate))
#expect(try await videoTrack.load(.estimatedDataRate) == 1_036_128)
let videoFormat = try #require(await videoTrack.load(.formatDescriptions).first)
#expect(videoFormat.mediaType == .video)
#expect(videoFormat.mediaSubType == .h264)
@@ -79,20 +116,19 @@ final class ExportSessionTests: BaseTests {
let exportedAsset = AVURLAsset(url: destinationURL.url)
#expect(try await exportedAsset.load(.duration) == .seconds(1))
// Audio
try #require(try await exportedAsset.loadTracks(withMediaType: .audio).count == 1)
let audioTrack = try #require(await exportedAsset.loadTracks(withMediaType: .audio).first)
try #require(try await exportedAsset.sendTracks(withMediaType: .audio).count == 1)
let audioTrack = try #require(await exportedAsset.sendTracks(withMediaType: .audio).first)
let audioFormat = try #require(await audioTrack.load(.formatDescriptions).first)
#expect(audioFormat.mediaType == .audio)
#expect(audioFormat.mediaSubType == .mpeg4AAC)
#expect(audioFormat.audioChannelLayout?.numberOfChannels == 2)
#expect(audioFormat.audioStreamBasicDescription?.mSampleRate == 44_100)
// Video
try #require(await exportedAsset.loadTracks(withMediaType: .video).count == 1)
let videoTrack = try #require(await exportedAsset.loadTracks(withMediaType: .video).first)
try #require(await exportedAsset.sendTracks(withMediaType: .video).count == 1)
let videoTrack = try #require(await exportedAsset.sendTracks(withMediaType: .video).first)
#expect(try await videoTrack.load(.naturalSize) == CGSize(width: 1280, height: 720))
#expect(try await videoTrack.load(.nominalFrameRate) == 24.0)
let dataRate = try await videoTrack.load(.estimatedDataRate)
#expect((900_000 ... 1_130_000).contains(dataRate))
#expect(try await videoTrack.load(.estimatedDataRate) == 1_036_128)
let videoFormat = try #require(await videoTrack.load(.formatDescriptions).first)
#expect(videoFormat.mediaType == .video)
#expect(videoFormat.mediaSubType == .h264)
@@ -101,7 +137,7 @@ final class ExportSessionTests: BaseTests {
#expect(videoFormat.extensions[.yCbCrMatrix] == .yCbCrMatrix(.itu_R_709_2))
}
@Test func test_export_default_time_range() async throws {
@Test func test_export_default_timerange() async throws {
let sourceURL = resourceURL(named: "test-720p-h264-24fps.mov")
let destinationURL = makeTemporaryURL()
@@ -154,36 +190,6 @@ final class ExportSessionTests: BaseTests {
#expect(try await exportedTrack.load(.naturalSize) == CGSize(width: 1280, height: 720))
}
@Test func test_export_x264_60fps() async throws {
let sourceURL = resourceURL(named: "test-x264-1080p-h264-60fps.mp4")
let destinationURL = makeTemporaryURL()
let subject = ExportSession()
try await subject.export(
asset: makeAsset(url: sourceURL),
video: .codec(.h264, width: 1920, height: 1080)
.bitrate(2_500_000)
.fps(30),
to: destinationURL.url,
as: .mp4
)
let exportedAsset = AVURLAsset(url: destinationURL.url)
let videoTrack = try #require(await exportedAsset.loadTracks(withMediaType: .video).first)
let naturalSize = try await videoTrack.load(.naturalSize)
#expect(naturalSize == CGSize(width: 1920, height: 1080))
let fps = try await videoTrack.load(.nominalFrameRate)
#expect(Int(fps.rounded()) == 30)
let dataRate = try await videoTrack.load(.estimatedDataRate)
#expect((2_400_000 ... 2_700_000).contains(dataRate))
let videoFormat = try #require(await videoTrack.load(.formatDescriptions).first)
#expect(videoFormat.mediaType == .video)
#expect(videoFormat.mediaSubType == .h264)
#expect(videoFormat.extensions[.colorPrimaries] == .colorPrimaries(.itu_R_709_2))
#expect(videoFormat.extensions[.transferFunction] == .transferFunction(.itu_R_709_2))
#expect(videoFormat.extensions[.yCbCrMatrix] == .yCbCrMatrix(.itu_R_709_2))
}
@Test func test_export_progress() async throws {
let sourceURL = resourceURL(named: "test-720p-h264-24fps.mov")
let progressValues = SendableWrapper<[Float]>([])
@@ -201,8 +207,6 @@ final class ExportSessionTests: BaseTests {
as: .mov
)
// Wait for last progress value to be yielded.
try await Task.sleep(for: .milliseconds(10))
#expect(progressValues.value.count > 2, "There should be intermediate progress updates")
#expect(progressValues.value.first == 0.0)
#expect(progressValues.value.last == 1.0)
}
@Test func test_export_throws_with_empty_audio_settings() async throws {
try await #require(throws: ExportSession.Error.setupFailure(.audioSettingsEmpty)) {
let sourceURL = resourceURL(named: "test-720p-h264-24fps.mov")
let videoComposition = try await makeVideoComposition(assetURL: sourceURL)
let subject = ExportSession()
try await subject.export(
asset: makeAsset(url: sourceURL),
audioOutputSettings: [:], // Here it matters because there's an audio track
videoOutputSettings: VideoOutputSettings
.codec(.h264, size: videoComposition.renderSize).settingsDictionary,
composition: videoComposition,
to: makeTemporaryURL().url,
as: .mov
)
}
}
@Test func test_export_throws_with_invalid_audio_settings() async throws {
try await #require(throws: ExportSession.Error.setupFailure(.audioSettingsInvalid)) {
let sourceURL = resourceURL(named: "test-720p-h264-24fps.mov")
let subject = ExportSession()
try await subject.export(
asset: makeAsset(url: sourceURL),
audioOutputSettings: [
AVFormatIDKey: kAudioFormatMPEG4AAC,
AVNumberOfChannelsKey: NSNumber(value: -1), // invalid number of channels
],
videoOutputSettings: VideoOutputSettings
.codec(.h264, size: CGSize(width: 1280, height: 720)).settingsDictionary,
to: makeTemporaryURL().url,
as: .mov
)
}
}
@Test func test_export_throws_with_invalid_video_settings() async throws {
try await #require(throws: ExportSession.Error.setupFailure(.videoSettingsInvalid)) {
let sourceURL = resourceURL(named: "test-720p-h264-24fps.mov")
let size = CGSize(width: 1280, height: 720)
let subject = ExportSession()
try await subject.export(
asset: self.makeAsset(url: sourceURL),
asset: makeAsset(url: sourceURL),
audioOutputSettings: AudioOutputSettings.default.settingsDictionary,
videoOutputSettings: [
// missing codec
AVVideoWidthKey: NSNumber(value: Int(size.width)),
AVVideoHeightKey: NSNumber(value: Int(size.height)),
],
composition: nil,
to: makeTemporaryURL().url,
as: .mov
)
}
}
@Test func test_export_throws_with_no_video_track() async throws {
try await #require(throws: ExportSession.Error.setupFailure(.videoTracksEmpty)) {
let sourceURL = resourceURL(named: "test-no-video.m4a")
let subject = ExportSession()
try await subject.export(
asset: makeAsset(url: sourceURL),
video: .codec(.h264, width: 1280, height: 720),
to: makeTemporaryURL().url,
as: .mov
)
}
}
@Test func test_export_cancellation() async throws {
let sourceURL = resourceURL(named: "test-720p-h264-24fps.mov")
let destinationURL = makeTemporaryURL()
let subject = ExportSession()
let task = Task {
let sourceAsset = AVURLAsset(url: sourceURL, options: [
AVURLAssetPreferPreciseDurationAndTimingKey: true,
])
try await subject.export(
asset: sourceAsset,
video: .codec(.h264, width: 1280, height: 720),
to: destinationURL.url,
as: .mov
)
Issue.record("Task should be cancelled long before we get here")
}
NSLog("Waiting for encoding to begin...")
for await progress in subject.progressStream where progress > 0 {
break
}
NSLog("Sleeping for 0.3s")
try await Task.sleep(for: .milliseconds(300))
NSLog("Cancelling task")
task.cancel()
try? await task.value // Wait for task to complete
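The cancellation flow above (start a task, wait for work to begin, cancel, then await the task) can be sketched without AVFoundation. In this minimal stand-alone version, `slowWork` is a hypothetical stand-in for the export call, not part of SJSAssetExportSession; the key point is that `Task.sleep` throws `CancellationError` once the enclosing task is cancelled, so the work unwinds cooperatively:

```swift
import Foundation

// Hypothetical stand-in for a long-running export: sleeps in small
// increments so cancellation is observed promptly.
func slowWork() async throws {
    for _ in 0..<1_000 {
        // Throws CancellationError if the surrounding task is cancelled.
        try await Task.sleep(for: .milliseconds(10))
    }
}

var result = "finished"
let task = Task {
    try await slowWork()
}

// Give the work a moment to start, then cancel it.
try await Task.sleep(for: .milliseconds(50))
task.cancel()

do {
    try await task.value
} catch is CancellationError {
    result = "cancelled"
}
print(result)
```

Awaiting `task.value` after `cancel()` mirrors the `try? await task.value` in the test: it guarantees the task has fully finished (or unwound) before the assertions run.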