SwiftUI: Generate Sounds/Signals With AVAudioEngine



This content originally appeared on Level Up Coding – Medium and was authored by Itsuki

Let’s build a simple music notes player! Single & Chords!

Of course, you cannot hear the sound from my little gif!
So! Grab it from my GitHub and give it a try yourself!

AVAudioEngine is super powerful!

Not just for playing some existing audio files! Not just for processing audio input from the mic in real time!

We can use it to generate sounds/signals from scratch!

The idea itself is really simple (as we will see in a couple of seconds), but it does take some work to actually get it to work!

Because!

The API provided is so C and full of mutable pointers!

The entire sound-rendering loop is so fragile that anything might lead to sound corruption!

And Blah!!!!

So!

In this article, let’s build a simple music notes player to play some notes from scratch, ie: without pre-adding any sound files, to check out how we can generate sounds with AVAudioEngine!

I assume that you do have a fairly good grasp of AVAudioEngine. If you need a catch-up, please give my previous article on AVAudioEngine With Swift Concurrency a check!

Let’s start!

Most Important Points

It is really, like REALLY important, so please let me point this out at the beginning!

Try to avoid allocating memory, performing file I/O, taking locks, or interacting with the Swift or Objective-C runtimes when rendering audio!

That is, instead of using a computed property, store the value!

Remove all those prints!

And blahhh!

Because even just that will lead to corrupted audio!

Try To Avoid Capturing Self or Updating Self while rendering!

This might lead to Objective-C member lookups, and in the worst case, app crashes!

— — — — — — — — — —

If what I am saying here doesn’t make much sense, I promise (or hope) it will after this article!

Basic Idea

Just like we provide the audio data through an AVAudioPlayerNode for playing audio files, and use the AVAudioInputNode to use the mic as an audio source for recording, here, we have this AVAudioSourceNode! (I hope you are familiar with Audio Nodes; if not, please give my previous article on AVAudioEngine With Swift Concurrency a check!)

It allows us to supply audio data for rendering through an AVAudioSourceNodeRenderBlock.

typealias AVAudioSourceNodeRenderBlock = (
    UnsafeMutablePointer<ObjCBool>,
    UnsafePointer<AudioTimeStamp>,
    AVAudioFrameCount,
    UnsafeMutablePointer<AudioBufferList>
) -> OSStatus

(I know, so many pointers!)

The return value is just an OSStatus result code. If we return an error, the framework will consider the audio data invalid.

What we will have access to within this render block are

  • isSilence: A Boolean value that indicates whether the buffer contains only silence.
  • timestamp: The HAL time the audio data renders.
  • frameCount: The number (AVAudioFrameCount is just a UInt32) of sample frames of audio data the engine requests.
  • outputData: The output data.

The only ones we really care about here are the frameCount and the outputData.

Basically, for each render, ie: each time this AVAudioSourceNodeRenderBlock is called, frameCount acts like a tiny window of time, starting from the timestamp, that we will calculate the output data for. We will then use the outputData pointer to provide audio data for ALL frameCount frames!
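To put a rough, hypothetical number on it: at a 44.1 kHz sample rate, a frameCount of, say, 512 covers about 512 / 44100 ≈ 11.6 ms of audio (the exact count per render is up to the engine).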

We will get into a lot more details on how this can be achieved, but before that, what is the audio data we will be outputting?

Audio Data

First of all, I think we all agree that sounds are just waves!

For example, the music note we will be playing in this article will just be a simple sine wave of a specific frequency. (I am sorry, I am not an artist nor a musician, so I don’t see those notes as anything romantic!) Therefore, at any given time (or phase, or x, whatever you call it), the output audio data will simply be the y-value of the sine wave given by y = sin(τ)!

And of course, the exact same idea applies to any other waveforms!

Sawtooth, Square, Triangle!

Or it doesn’t have to be a waveform, ie: noises!
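
To make that concrete, here is a tiny, purely illustrative sketch (the Waveform type below is not something we will use later) of how a phase value in [0, 2π) maps to a single sample for each of those shapes:

import Foundation

// A minimal sketch of mapping a phase in [0, 2π) to one sample value
// for a few common waveforms. Illustration only, not part of the player below.
enum Waveform {
    case sine, sawtooth, square, triangle

    func sample(atPhase phase: Float) -> Float {
        let tau = 2 * Float.pi
        switch self {
        case .sine:
            return sin(phase)
        case .sawtooth:
            // ramps linearly from -1 up to 1 over one period
            return 2 * (phase / tau) - 1
        case .square:
            // +1 for the first half of the period, -1 for the second
            return phase < Float.pi ? 1 : -1
        case .triangle:
            // rises from -1 to 1 and falls back within one period
            return 1 - 4 * abs(phase / tau - 0.5)
        }
    }
}

Swap the case in sample(atPhase:) and the rest of the rendering pipeline stays exactly the same.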

Music Note Player

Let’s get hands-on with some code!

Set Up Audio Session

Since we will be using the AVAudioEngine, the first thing we want to do is to set up the AVAudioSession. Like always!

private let audioSession: AVAudioSession = AVAudioSession.sharedInstance()

private func configureAudioSession() throws {
    try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [.duckOthers, .defaultToSpeaker, .allowBluetoothHFP])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
}

Configure Audio Engine

The next thing we have here is to create the AVAudioSourceNode, attach it to the AVAudioEngine, and connect it to the mainMixerNode.

private let audioEngine = AVAudioEngine()

private func configureAudioEngine() {
    let mixerNode = self.audioEngine.mainMixerNode
    mixerNode.outputVolume = 0.5
    let outputNode = self.audioEngine.outputNode

    let outputFormat = outputNode.inputFormat(forBus: 0)
    let inputFormat = AVAudioFormat(commonFormat: outputFormat.commonFormat, sampleRate: outputFormat.sampleRate, channels: 1, interleaved: outputFormat.isInterleaved)

    let sourceNode = AVAudioSourceNode(renderBlock: self.renderBlock)

    self.audioEngine.attach(sourceNode)
    self.audioEngine.connect(sourceNode, to: mixerNode, format: inputFormat)

    // The main mixer node and the output node are connected automatically.
    // Therefore, we don't need the following.
    // self.audioEngine.connect(mixerNode, to: outputNode, format: outputFormat)

    self.audioEngine.prepare()
}


private func renderBlock(
    isSilence: UnsafeMutablePointer<ObjCBool>,
    timestamp: UnsafePointer<AudioTimeStamp>,
    frameCount: AVAudioFrameCount,
    outputData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
    // Coming next!

}

I have set the outputVolume here just so that the output audio won’t get too loud. You can also change the channels of the inputFormat to a different number if you like.

🚨🚨 Render Block!

Up to this point, we are pretty much doing the exact same thing as what we had in my previous AVAudioEngine With Swift Concurrency article! We are just swapping out the node that we will be providing the audio with!

Here it comes! The main dish for today!

Code first! We will then be diving in!

private func renderBlock(
    isSilence: UnsafeMutablePointer<ObjCBool>,
    timestamp: UnsafePointer<AudioTimeStamp>,
    frameCount: AVAudioFrameCount,
    outputData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {

    // Try to avoid capturing "self" in the rendering loop, which leads to Objective-C member lookups.
    let player = self

    // perform rendering
    for frame in 0..<frameCount {
        let value: Float = player.getNextSampleValue(...)

        // The following will result in: Initialization of 'UnsafeMutableBufferPointer<AudioBuffer>' results in a dangling buffer pointer.
        // Therefore, we will need to wrap it with a withUnsafeMutablePointer.
        // let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: &outputData.pointee.mBuffers, count: Int(outputData.pointee.mNumberBuffers))

        withUnsafeMutablePointer(to: &outputData.pointee.mBuffers, { (pointer: UnsafeMutablePointer<AudioBuffer>) in
            // outputData.pointee.mBuffers: A variable-length array of audio buffers.
            // Normally 1 for each channel.
            let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: pointer, count: Int(outputData.pointee.mNumberBuffers))

            // loop through the buffers
            for buffer in buffers {
                // update the data for a specific frame
                buffer.mData?.storeBytes(of: value, toByteOffset: Int(frame) * MemoryLayout<Float>.size, as: Float.self)
            }
        })
    }

    // No error, ie: 0.
    return noErr
}


private func getNextSampleValue(...) -> Float {
    // ...
}

I have left out the getNextSampleValue because! It does NOT matter!

Okay, it does in terms of generating a music note (or notes), but it does NOT if we are talking about just generating some sounds (signals) with AVAudioSourceNode!

If you like, you can simply return a constant from it, for example, 1.0, start the audio engine with start(), and hear some random sounds playing!
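
Just as a sketch, a throwaway version of that stub could be as dumb as this (a constant output is a DC signal, so it will not sound like much, but it confirms the render path is alive):

// A throwaway stand-in for getNextSampleValue, parameter-free purely for illustration.
// It just proves that the render block is being called and the buffers are being filled.
private func getNextSampleValue() -> Float {
    return 1.0
}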

First of all, let’s take a look at this AudioBufferList data type of our outputData.

It is a structure that stores a variable-length array of audio buffers. Within this structure, we have two properties: mNumberBuffers, the number of AudioBuffers in the list, and mBuffers, the variable-length array of AudioBuffer itself.
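
For reference, the two types come across to Swift roughly like this (a sketch of the layout only, not the actual CoreAudio declarations, so don't paste it anywhere):

// Roughly the shape of the imported CoreAudio types (sketch, not the real headers).
struct AudioBufferList {
    var mNumberBuffers: UInt32
    var mBuffers: AudioBuffer      // a variable-length array in C; Swift only sees the first element here
}

struct AudioBuffer {
    var mNumberChannels: UInt32
    var mDataByteSize: UInt32
    var mData: UnsafeMutableRawPointer?
}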

Now the question becomes how do we actually loop through that AudioBuffer?

Obviously we cannot do for buffer in mBuffers!

What we will have to do here is to create a new buffer pointer over mNumberBuffers contiguous AudioBuffers, beginning at mBuffers’s address.

let buffers = UnsafeMutableBufferPointer<AudioBuffer>(
    start: pointer,
    count: Int(outputData.pointee.mNumberBuffers)
)

We can then use that for loop we are super used to!

Note that we are using withUnsafeMutablePointer to access mBuffers’s pointer here instead of simply passing &outputData.pointee.mBuffers as the start. This is so that we don’t end up with an Initialization of ‘UnsafeMutableBufferPointer<AudioBuffer>’ results in a dangling buffer pointer warning.
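
As a side note, CoreAudio also ships a small Swift wrapper, UnsafeMutableAudioBufferListPointer, that exposes the same memory as a collection. If you prefer, the same loop could look roughly like this (a sketch, reusing the value and frame variables from the render block above):

// Alternative sketch using CoreAudio's Swift wrapper around AudioBufferList.
// It behaves as a collection of AudioBuffer, so there is no manual pointer math for the list itself.
let ablPointer = UnsafeMutableAudioBufferListPointer(outputData)
for buffer in ablPointer {
    buffer.mData?.storeBytes(of: value, toByteOffset: Int(frame) * MemoryLayout<Float>.size, as: Float.self)
}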

Now that we are able to access each individual buffer we want to update, another problem comes up!

How do we update data per frame?

If we take a look at this AudioBuffer, the mData field is a SINGLE UnsafeMutableRawPointer to a buffer of audio data.

Therefore, to assign the audio data for a specific frame, we can use the storeBytes(of:toByteOffset:as:) function with the offset being the frame we are assigning the data to, multiplied by the size of the data we are trying to assign, in this case, that of a Float. For example, frame 0 lands at byte offset 0, frame 1 at byte offset 4, frame 2 at byte offset 8, and so on, since a Float is 4 bytes.

The documentation mentions that we also need to update the mDataByteSize and mNumberChannels fields of the AudioBuffer as we update the data field. However, I found this to be unnecessary, as those were updated automatically as I assigned data, at least in my case.
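
If you do want to set them explicitly anyway, a hedged sketch (inside the same closure where buffers is created) might look like this:

// Explicitly setting the bookkeeping fields on each buffer.
// The article found this unnecessary in practice, so treat it as belt-and-suspenders.
for i in buffers.indices {
    buffers[i].mNumberChannels = 1
    buffers[i].mDataByteSize = frameCount * UInt32(MemoryLayout<Float>.size)
}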

Now you might wonder why I loop over the frames outside instead of within the withUnsafeMutablePointer! A couple of reasons!

First of all, the value is independent of the buffer but depends on the frame, so we can avoid some repeated calculation if we have multiple buffers, which is REALLY important in terms of rendering sounds with better quality!

Secondly, as I have mentioned, frameCount is like a tiny window of time that we calculate the output data for, which means, of course, we should update the earlier frames earlier!

Get Value For Sample

Yes, it doesn’t matter as much for what we have above if we just want a sound, but we do have a couple of important points here if we want a GOOD sound!

Let me share my setup here first!

private static let `2pi`: Float = 2 * Float.pi

// The sample rate is used to compute the phase increment when the generator frequency changes.
// A good sample rate for music is generally 44.1 kHz or 48 kHz
private static let sampleRate: Float = 44100

// amplitude of the sine wave
private static let amplitude: Float = 0.5


// to keep track of the phase (radian) of a specific frequency to get the next sample (value) on
// [frequency: Phase]
@ObservationIgnored
private var frequencyPhaseDict: [Float: Float] = [:]

// keep track of removed frequency so that we can ramp the amplitude to avoid audio artifact
// [frequency: (Phase, Amplitude)]
@ObservationIgnored
private var removedFrequencyMap: [Float: (Float, Float)] = [:]

// the amount to decrease the amplitude when a note is released
private let amplitudeRampIncrement: Float = MusicNotesPlayer.amplitude / ( MusicNotesPlayer.sampleRate * 0.1)

private func renderBlock(
    isSilence: UnsafeMutablePointer<ObjCBool>,
    timestamp: UnsafePointer<AudioTimeStamp>,
    frameCount: AVAudioFrameCount,
    outputData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
    // Try to avoid capturing "self" in the rendering loop, which leads to Objective-C member lookups.
    // Therefore, we create another reference here.
    let player = self
    var frequencyPhaseDict = player.frequencyPhaseDict
    var removedFrequencyMap = player.removedFrequencyMap
    let amplitudeRampIncrement = player.amplitudeRampIncrement

    // perform rendering
    for frame in 0..<frameCount {
        let value: Float = player.getNextSampleValue(&frequencyPhaseDict, &removedFrequencyMap, amplitudeRampIncrement: amplitudeRampIncrement)

        // The following will result in: Initialization of 'UnsafeMutableBufferPointer<AudioBuffer>' results in a dangling buffer pointer.
        // Therefore, we will need to wrap it with a withUnsafeMutablePointer.
        // let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: &outputData.pointee.mBuffers, count: Int(outputData.pointee.mNumberBuffers))

        withUnsafeMutablePointer(to: &outputData.pointee.mBuffers, { (pointer: UnsafeMutablePointer<AudioBuffer>) in
            // outputData.pointee.mBuffers: A variable-length array of audio buffers.
            // Normally 1 for each channel.
            let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: pointer, count: Int(outputData.pointee.mNumberBuffers))

            // loop through the buffers
            for buffer in buffers {
                // update the data for a specific frame
                buffer.mData?.storeBytes(of: value, toByteOffset: Int(frame) * MemoryLayout<Float>.size, as: Float.self)
            }
        })
    }

    // Update self after rendering.
    //
    // We need a loop here instead of a direct assignment because
    // frequencies might already have been removed, or new ones might have been added, that are not reflected in the captured copies.
    var newDeleted: Set<Float> = []
    for (frequency, phase) in frequencyPhaseDict {
        if player.frequencyPhaseDict.contains(where: { $0.key == frequency }) {
            player.frequencyPhaseDict[frequency] = phase
        } else {
            newDeleted.insert(frequency)
        }
    }

    for (frequency, value) in removedFrequencyMap {
        if player.removedFrequencyMap.contains(where: { $0.key == frequency }) &&
            // if it is newly added, we keep the new phase and amplitude
            !newDeleted.contains(frequency) {
            player.removedFrequencyMap[frequency] = value
        }
    }

    // No error, ie: 0.
    return noErr
}



// IMPORTANT:
// This function is called within the render block for rendering.
// Therefore, we should avoid updating `self`, because
// capturing "self" in the render block leads to Objective-C member lookups.
private func getNextSampleValue(_ _frequencyPhaseDict: inout [Float: Float], _ _removedFrequencyMap: inout [Float: (Float, Float)], amplitudeRampIncrement: Float) -> Float {
    var finalValue: Float = 0.0

    // make a copy to update
    var frequencyPhaseDict = _frequencyPhaseDict

    // loop with the original
    for (frequency, phase) in _frequencyPhaseDict {
        let value = sin(phase)
        finalValue = finalValue + value

        // update phase to get the next sample on
        let phaseIncrement = frequency * MusicNotesPlayer.`2pi` / MusicNotesPlayer.sampleRate
        let newPhase = phaseIncrement + phase
        // wrap the phase to be within 2pi
        frequencyPhaseDict[frequency] = newPhase >= MusicNotesPlayer.`2pi` ? newPhase - MusicNotesPlayer.`2pi` : newPhase
    }

    finalValue = finalValue * MusicNotesPlayer.amplitude


    // dim out the removed frequencies to avoid audio artifacts
    // make a copy to update
    var removedFrequencyMap = _removedFrequencyMap

    // loop with the original
    for (frequency, (phase, amplitude)) in _removedFrequencyMap {
        let value = sin(phase)
        finalValue = finalValue + value * amplitude

        // update phase to get the next sample on
        let phaseIncrement = frequency * MusicNotesPlayer.`2pi` / MusicNotesPlayer.sampleRate
        let newPhase = phaseIncrement + phase
        let newAmplitude = max(0, amplitude - amplitudeRampIncrement)
        if newAmplitude <= 0 {
            removedFrequencyMap[frequency] = nil
            continue
        }
        // wrap the phase to be within 2pi
        removedFrequencyMap[frequency] = (newPhase >= MusicNotesPlayer.`2pi` ? newPhase - MusicNotesPlayer.`2pi` : newPhase, newAmplitude)
    }

    // update after looping
    _frequencyPhaseDict = frequencyPhaseDict
    _removedFrequencyMap = removedFrequencyMap

    return finalValue
}
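
A quick sanity check on the phase math: for A4 at 440 Hz with a 44.1 kHz sample rate, the phase increment works out to 440 × 2π / 44100 ≈ 0.0627 radians per sample, i.e. roughly 100 samples per full sine cycle, which matches 44100 / 440 ≈ 100.2.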

Important Points!

  1. Try to avoid capturing self in the rendering loop
  2. Try to avoid performing any additional or less important calculations or updates while looping through the frames, as it might lead to corrupted sound! That’s why our amplitudeRampIncrement is not a computed property, and that’s why we only update the removedFrequencyMap and frequencyPhaseDict after the loop.

Less important but still important points!

  1. We are keeping a removedFrequencyMap so that we can ramp the amplitude down gradually instead of just going straight from 0.5 to 0. This will help us prevent audio artifacts from happening. Depending on your use case, for example, if you are allowing the user to adjust the frequency while the sound is playing, you might also want to consider implementing a ramp for the phase as well (see the sketch just below)!

Other than that, what we have are just some simple calculations.
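
As for that ramping idea: one common way to handle a live frequency change without a click is to glide toward the new value a little per frame, so the phase increment changes smoothly instead of jumping. A rough sketch (not part of the player above; all names here are made up):

// A rough sketch of gliding a frequency toward a new target, one small step per frame,
// so the per-frame phase increment (and therefore the waveform) changes smoothly.
struct FrequencyGlide {
    var current: Float
    var target: Float
    let increment: Float   // precompute outside the render loop, e.g. maxJump / (sampleRate * 0.05)

    // Call once per frame inside the render loop, then use the returned value
    // to compute that frame's phase increment.
    mutating func next() -> Float {
        if current < target {
            current = min(current + increment, target)
        } else if current > target {
            current = max(current - increment, target)
        }
        return current
    }
}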

Final Code

Just wrapping up what we have above together with a couple of simple views so that we can play it really quickly!

Again, if you are too lazy to copy & paste, you can also just grab it from my GitHub!

Manager


extension MusicNotesPlayer {
private static let `2pi`: Float = 2 * Float.pi

// The sample rate is used to compute the phase increment when the generator frequency changes.
// A good sample rate for music is generally 44.1 kHz or 48 kHz
private static let sampleRate: Float = 44100

// amplitude of the sine wave
private static let amplitude: Float = 0.5
}


@Observable
class MusicNotesPlayer {

// Frequency in Hz
var frequencies: Set<Float> = [] {
didSet {
updateDicts(oldFrequencies: oldValue, newFrequencies: self.frequencies)
}
}

// to keep track of the phase (radian) of a specific frequency to get the next sample (value) on
// [frequency: Phase]
@ObservationIgnored
private var frequencyPhaseDict: [Float: Float] = [:]

// keep track of removed frequency so that we can ramp the amplitude to avoid audio artifact
// [frequency: (Phase, Amplitude)]
@ObservationIgnored
private var removedFrequencyMap: [Float: (Float, Float)] = [:]


var error: Error? {
didSet {
if let error = self.error {
print(error)
}
}
}


// the amount to decrease the amplitude when a note is released
// For ramping (dimming the amplitude) when we remove a frequency to avoid audio artifact
// IMPORTANT: do NOT use a computed property, ie: `get`.
// To ensure glitch-free performance, audio processing must occur in a real-time safe context;
// therefore, we should try our best not to allocate memory, perform file I/O, take locks, or interact with the Swift or Objective-C runtimes when rendering audio.
private let amplitudeRampIncrement: Float = MusicNotesPlayer.amplitude / ( MusicNotesPlayer.sampleRate * 0.1)

private let audioEngine = AVAudioEngine()
private let audioSession: AVAudioSession = AVAudioSession.sharedInstance()


init() {

do {
try self.configureAudioSession()
self.configureAudioEngine()
} catch(let error) {
self.error = error
}
}

deinit {
self.audioEngine.stop()
}

func start() {
do {
try self.audioEngine.start()
} catch(let error) {
self.error = error
}
}

func pause() {
self.audioEngine.pause()
}


private func configureAudioSession() throws {
try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [.duckOthers, .defaultToSpeaker, .allowBluetoothHFP])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
}

private func configureAudioEngine() {
let mixerNode = self.audioEngine.mainMixerNode
mixerNode.outputVolume = 0.8
let outputNode = self.audioEngine.outputNode

let outputFormat = outputNode.inputFormat(forBus: 0)
let inputFormat = AVAudioFormat(commonFormat: outputFormat.commonFormat, sampleRate: outputFormat.sampleRate, channels: 1, interleaved: outputFormat.isInterleaved)

let sourceNode = AVAudioSourceNode(renderBlock: self.renderBlock)

self.audioEngine.attach(sourceNode)
self.audioEngine.connect(sourceNode, to: mixerNode, format: inputFormat)

// The main mixer node and the output node are connected automatically.
// Therefore, we don't need the following.
// self.audioEngine.connect(mixerNode, to: outputNode, format: outputFormat)

self.audioEngine.prepare()
}



private func renderBlock(
isSilence: UnsafeMutablePointer<ObjCBool>,
timestamp: UnsafePointer<AudioTimeStamp>,
frameCount: AVAudioFrameCount,
outputData: UnsafeMutablePointer<AudioBufferList>
) -> OSStatus {
// A sample of render block
// isSilence.pointee: false
// timestamp.pointee: AudioTimeStamp(mSampleTime: 85317452.0, mHostTime: 3742289118028, mRateScalar: 1.0000020152698863, mWordClockTime: 0, mSMPTETime: __C.SMPTETime(mSubframes: 0, mSubframeDivisor: 0, mCounter: 0, mType: __C.SMPTETimeType, mFlags: __C.SMPTETimeFlags(rawValue: 0), mHours: 0, mMinutes: 0, mSeconds: 0, mFrames: 0), mFlags: __C.AudioTimeStampFlags(rawValue: 7), mReserved: 0)
// frameCount: 471
// outputData.pointee: AudioBufferList(mNumberBuffers: 1, mBuffers: __C.AudioBuffer(mNumberChannels: 1, mDataByteSize: 1884, mData: Optional(0x0000000103861a00)))

// Try to avoid capturing "self" in the rendering loop, which leads to Objective-C member lookups.
// Therefore, we create another reference here.
let player = self
var frequencyPhaseDict = player.frequencyPhaseDict
var removedFrequencyMap = player.removedFrequencyMap
let amplitudeRampIncrement = player.amplitudeRampIncrement

// perform rendering
for frame in 0..<frameCount {
let value: Float = player.getNextSampleValue(&frequencyPhaseDict, &removedFrequencyMap, amplitudeRampIncrement: amplitudeRampIncrement)


// Following will result in: Initialization of 'UnsafeMutableBufferPointer<AudioBuffer>' results in a dangling buffer pointer.
// Therefore, we will need to wrap it with a withUnsafeMutablePointer
// let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: &outputData.pointee.mBuffers, count: Int(outputData.pointee.mNumberBuffers))

withUnsafeMutablePointer(to: &outputData.pointee.mBuffers, { (pointer: UnsafeMutablePointer<AudioBuffer>) in
// outputData.pointee.mBuffers: A variable-length array of audio buffers.
// normally 1 for each channel
let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: pointer, count: Int(outputData.pointee.mNumberBuffers))

// loop through the buffers
for buffer in buffers {
// update the data for a specific frame
buffer.mData?.storeBytes(of: value, toByteOffset: Int(frame) * MemoryLayout<Float>.size, as: Float.self)

}
})
}

// Update self after rendering.
//
// We need a loop here instead of a direct assignment because
// frequencies might already have been removed, or new ones might have been added, that are not reflected in the captured copies.
var newDeleted: Set<Float> = []
for (frequency, phase) in frequencyPhaseDict {
if player.frequencyPhaseDict.contains(where: {$0.key == frequency}) {
player.frequencyPhaseDict[frequency] = phase
} else {
newDeleted.insert(frequency)
}
}

for (frequency, value) in removedFrequencyMap {
if player.removedFrequencyMap.contains(where: {$0.key == frequency}) &&
// if it is newly added, we keep the new phase and amplitude
!newDeleted.contains(frequency) {
player.removedFrequencyMap[frequency] = value
}
}

// No error, ie: 0.
return noErr

}



// IMPORTANT:
// This function is called within the render block for rendering.
// Therefore, we should avoid updating `self`, because
// capturing "self" in the render block leads to Objective-C member lookups.
private func getNextSampleValue(_ _frequencyPhaseDict: inout [Float: Float], _ _removedFrequencyMap: inout [Float: (Float, Float)], amplitudeRampIncrement: Float) -> Float {
var finalValue: Float = 0.0

// make a copy to update
var frequencyPhaseDict = _frequencyPhaseDict

// loop with the original
for (frequency, phase) in _frequencyPhaseDict {
let value = sin(phase)
finalValue = finalValue + value

// update phase to get the next sample on
let phaseIncrement = frequency * MusicNotesPlayer.`2pi` / MusicNotesPlayer.sampleRate
let newPhase = phaseIncrement + phase
// wrap the phase to be within 2pi
frequencyPhaseDict[frequency] = newPhase >= MusicNotesPlayer.`2pi` ? newPhase - MusicNotesPlayer.`2pi` : newPhase
}

finalValue = finalValue * MusicNotesPlayer.amplitude


// dim out the removed frequencies to avoid audio artifact
// make a copy to update
var removedFrequencyMap = _removedFrequencyMap

// loop with the original
for (frequency, (phase, amplitude)) in _removedFrequencyMap {
let value = sin(phase)
finalValue = finalValue + value * amplitude

// update phase to get the next sample on
let phaseIncrement = frequency * MusicNotesPlayer.`2pi` / MusicNotesPlayer.sampleRate
let newPhase = phaseIncrement + phase
let newAmplitude = max(0, amplitude - amplitudeRampIncrement)
if newAmplitude <= 0 {
removedFrequencyMap[frequency] = nil
continue
}
// wrap the phase to be within 2pi
removedFrequencyMap[frequency] = (newPhase >= MusicNotesPlayer.`2pi` ? newPhase - MusicNotesPlayer.`2pi` : newPhase, newAmplitude)
}

// update after looping
_frequencyPhaseDict = frequencyPhaseDict
_removedFrequencyMap = removedFrequencyMap

return finalValue
}


private func updateDicts(oldFrequencies: Set<Float>, newFrequencies: Set<Float>) {
if oldFrequencies == newFrequencies {
return
}

var newDict = frequencyPhaseDict
var removedMap = removedFrequencyMap

// do not need to align initial phases
for new in newFrequencies {
removedMap[new] = nil
if oldFrequencies.contains(new) {
continue
}
newDict[new] = 0.0
}

let deletions = oldFrequencies.subtracting(newFrequencies)
for deletion in deletions {
guard let currentPhase = newDict[deletion] else {
continue
}
newDict[deletion] = nil
removedMap[deletion] = (currentPhase, MusicNotesPlayer.amplitude - self.amplitudeRampIncrement)
}

// update everything at once at last instead of within the loop
self.frequencyPhaseDict = newDict
self.removedFrequencyMap = removedMap

if self.frequencyPhaseDict.isEmpty && self.removedFrequencyMap.isEmpty {
self.pause()
} else if !self.audioEngine.isRunning {
self.start()
}
}


}

MusicNotes Model


enum MusicNotes: String, CaseIterable, Identifiable {
case C
case Db
case D
case Eb
case E
case F
case Gb
case G
case Ab
case A
case Bb
case B

var id: String {
return self.rawValue
}

var color: Color {
switch self {
case .Db, .Eb, .Gb, .Ab:
Color.black
default:
Color.white
}
}

// frequency at octave 0
private var frequency: Float {
switch self {

case .C:
16.35
case .Db:
17.32
case .D:
18.35
case .Eb:
19.45
case .E:
20.60
case .F:
21.83
case .Gb:
23.12
case .G:
24.50
case .Ab:
25.96
case .A:
27.50
case .Bb:
29.14
case .B:
30.87
}
}

func frequency(octave: UInt) -> Float {
return self.frequency * pow(2, Float(octave))
}
}
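
A quick sanity check on that doubling formula (this snippet is just for verification, it is not part of the app):

// Concert A (A4) should come out at 440 Hz.
let a4 = MusicNotes.A.frequency(octave: 4)   // 27.5 * 2^4 = 440.0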

Views

struct ContentView: View {
@Environment(MusicNotesPlayer.self) private var player

private let octaves: [UInt] = Array(3..<6)

var body: some View {
NavigationStack {
Group {
if let error = player.error {
ContentUnavailableView("Player Not Available", systemImage: "music.note.slash", description: Text(String("\(error)")))
} else {
VStack(spacing: 32) {
ForEach(octaves, id: \.self) { octave in
MusicNotesOctaveView(octave: octave)
}
}
.padding(.vertical, 24)
.padding(.horizontal, 16)
.frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .top)
}

}
.background(.yellow.opacity(0.2))
.navigationTitle("Music Notes Player")

}

}
}


struct MusicNotesOctaveView: View {
@Environment(MusicNotesPlayer.self) private var player

var octave: UInt

private let notes: [MusicNotes] = MusicNotes.allCases

var body: some View {
VStack(alignment: .leading) {
Text(String("Octave \(octave)"))
.font(.headline)

HStack(spacing: 0) {
ForEach(notes) { note in
let frequency = note.frequency(octave: self.octave)
let isPressed = player.frequencies.contains(frequency)
Rectangle()
.fill(note.color.mix(with: .gray, by: isPressed ? 0.7 : 0.1))
.border(.gray, width: 0.5)
.overlay(alignment: .bottom, content: {
Text(note.rawValue)
.font(.headline)
.foregroundStyle(note.color)
.colorInvert()
})
.contentShape(Rectangle())
.onLongPressGesture(minimumDuration: 0, perform: {}, onPressingChanged: { isPressed in
if isPressed {
self.player.frequencies.insert(frequency)
} else {
self.player.frequencies.remove(frequency)
}
})
}
}
.frame(height: 114)

}
}
}
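
One thing the views above rely on that is not shown here: the player has to be injected into the SwiftUI environment. A minimal sketch of the app entry point (the app struct name is hypothetical; check the GitHub project for the real one):

// A minimal sketch of injecting the player into the environment for @Environment(MusicNotesPlayer.self).
@main
struct MusicNotesPlayerApp: App {
    @State private var player = MusicNotesPlayer()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(player)
        }
    }
}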

Thank you for reading!

That’s it for this article!

Again, feel free to grab it from my GitHub!

Also, if you want to generate other signal (waveform) types or want to perform some phase ramping, Apple has provided some sample code for it! Other than the fact that it is written in C and full of bridging headers, it is pretty nice!

Happy sound generating!

