Optimize Your App’s Speed and Efficiency: Q&A
Learn optimization in depth from the Apple team
As you might know from my previous post — iOS 26: Foundation Model Framework - Code-Along Q&A — Apple is experimenting with new educational formats: webinars, 1-on-1 labs with engineers, and even hybrid sessions. Recently, this exact type of event was hosted in Cupertino and was also available online, where participants could once again ask questions and get responses directly from Apple engineers.
Today, I’m continuing the format I started earlier and sharing questions from the “Optimize your app’s speed and efficiency” session, whose online part consisted of:
Liquid Glass containers: performance and usage
Battery performance examples
A deep dive into the Instruments tools for the Foundation Models framework
SwiftUI Performance
Guest knowledge: Snap shared how they organized their performance tools
Disclaimer: At the time of publishing, the session video is only available on YouTube — similar to how it was with the Foundation Models session.
Everything is sorted, split into sections, and grammar-checked as usual.
Without further ado — the Q&A below!
General Usage 🦾
Is there a good pattern to pass an action closure to a view while minimizing impact, given that closures are hard to compare? Is there a more performant alternative?
Try to capture as little as possible in closures—for example, by not relying on implicit captures (which usually capture self and therefore depend on the whole view value) and instead capturing only the properties of the view value that you actually need in the closure.
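For instance, here’s a minimal sketch of that idea (Item, ItemRow, and onSelect are hypothetical names):

```swift
import SwiftUI

struct Item: Identifiable {
    let id: UUID
    let title: String
}

struct ItemRow: View {
    let title: String
    let action: () -> Void

    var body: some View {
        Button(title, action: action)
    }
}

struct ItemList: View {
    let items: [Item]
    let onSelect: (Item.ID) -> Void

    var body: some View {
        List(items) { item in
            // Copy just what the action needs into locals, so the closure
            // captures two small values instead of the whole view value
            // through an implicit `self`.
            let id = item.id
            let select = onSelect
            ItemRow(title: item.title) {
                select(id)
            }
        }
    }
}
```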
How should I think about a skipped update? Are we saying that frames were skipped?
No, it means that the view’s value (that is, all stored properties of a view) was equal to the previous view value and therefore the view’s body wasn’t evaluated.
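As a tiny illustration, consider a view whose value is a single stored property:

```swift
import SwiftUI

// If the parent re-renders but passes an equal `count`, this view's
// value compares equal to the previous one, and evaluating `body`
// is skipped — that's a "skipped update."
struct CounterLabel: View {
    let count: Int

    var body: some View {
        Text("Count: \(count)")
    }
}
```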
How does SwiftUI Canvas performance scale under visionOS multi-window contexts?
Canvas’s performance scaling shouldn’t be affected by using it in a visionOS multi-window context!
How do you recommend keeping an Observable model object in sync with a backing store, like a database? Should I use a private backing variable with a didSet to propagate bindings to the database, or is it better to write a custom observable object without the macro?
Should you choose to do this yourself, I would strongly encourage you to aggregate changes together at a greater scale than just a property change before propagating them to a database. In general, reacting synchronously to individual property changes one at a time is not good for performance.
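Here’s one possible shape for that, as a sketch: coalesce a burst of property changes into a single deferred write. DatabaseStore, Profile, and the 500 ms delay are all hypothetical.

```swift
import Foundation
import Observation

struct Profile {
    var name: String
    var bio: String
}

actor DatabaseStore {
    static let shared = DatabaseStore()
    func save(_ profile: Profile) { /* write to the backing store */ }
}

@Observable
final class ProfileModel {
    var name = "" { didSet { scheduleSave() } }
    var bio = ""  { didSet { scheduleSave() } }

    private var saveTask: Task<Void, Never>?

    private func scheduleSave() {
        // Cancel the pending write and start a new one, so a burst of
        // edits produces one database write instead of one per keystroke.
        saveTask?.cancel()
        saveTask = Task { [weak self] in
            try? await Task.sleep(for: .milliseconds(500))
            guard let self, !Task.isCancelled else { return }
            await DatabaseStore.shared.save(Profile(name: self.name, bio: self.bio))
        }
    }
}
```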
If injecting an Observable view model into a view, should it be stored as @State?
You don’t need to store it as @State; it can just be stored in a let or var.
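A minimal sketch, assuming an @Observable model injected from outside:

```swift
import SwiftUI
import Observation

@Observable
final class CartModel {
    var itemCount = 0
}

struct CartBadge: View {
    // A plain `let` is enough for an injected model; SwiftUI still
    // tracks which of its properties `body` reads.
    let model: CartModel

    var body: some View {
        Text("\(model.itemCount) items")
    }
}
```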
Would using a Timer to update my SwiftUI view be costly in terms of performance? For example, if I want to show the current time in hours, minutes, and seconds, but also have other views that depend on how long the timer has been running?
Yeah, this is fine! If you don’t need other events to happen in sync with the timer updating, we’d recommend using a date-relative text, but if you need multiple UI elements to be in sync with a timer, there’s nothing wrong with doing that. As the host emphasized, though, be sure that you’re only causing updates for views that actually need to change with the timer!
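The date-relative text mentioned above looks like this (a minimal sketch):

```swift
import SwiftUI

struct ElapsedLabel: View {
    let start: Date

    var body: some View {
        // The system keeps this text ticking once per second, without
        // you owning a Timer or re-evaluating the surrounding hierarchy.
        Text(start, style: .timer)
    }
}
```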
When does it make sense (if ever) to use @Binding in a child view to improve performance versus using let? Assume the child view does not update the value of the property passed in by the parent.
You should prefer using a let if you don’t need to write back to the binding. In most cases, reading a binding is equivalent to just passing the value directly, but in certain situations (such as when the binding is not generated directly from a @State), bindings can add additional overhead.
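For example, a read-only child can take the value directly, while a child that writes back takes a binding (hypothetical views):

```swift
import SwiftUI

// Read-only: prefer passing the plain value.
struct VolumeLabel: View {
    let volume: Double

    var body: some View {
        Text(volume, format: .percent)
    }
}

// Writes back: here a binding is the right tool.
struct VolumeSlider: View {
    @Binding var volume: Double

    var body: some View {
        Slider(value: $volume, in: 0...1)
    }
}
```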
By default, do ScrollViews have transparent backgrounds, even though they might appear white or black depending on display mode?
Thanks for the great question! Yes, by default SwiftUI ScrollViews have a transparent background. Certain other scrollable views may have additional backgrounds—for example, List.
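For instance, if you want a List to behave like a plain ScrollView in this respect, one sketch (iOS 16+ modifiers):

```swift
import SwiftUI

struct TransparentList: View {
    var body: some View {
        List(0..<20, id: \.self) { i in
            Text("Row \(i)")
        }
        // Hide List's own background so the one below shows through.
        .scrollContentBackground(.hidden)
        .background(.blue.gradient)
    }
}
```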
Why is “View Debugging → Rendering” disabled in Xcode? Does it only work on an actual device and not in a simulator?
I recommend filing a feedback to request an enhancement to this functionality to provide support for Simulator Run Destinations.
Is there a way to export text-based logs from Instruments and the SwiftUI Template for the various drill-downs (cause & effect) that I see in the UI? I want this for feeding into AI chat sessions. I’ve had some success using Copy / Copy All with AI, but I’d like a more robust workflow.
In addition to copying the data out of the detail views in the Instruments window, you can go to View → Show Recorded Data and find the tables starting with “swiftui-” to access the raw recorded data. And finally, you can use the xctrace export command on a recorded trace file to export data in an XML format. If that doesn’t fit your needs, please use Feedback Assistant and explain what you’re looking for so we can take a look.
Is there any guidance for understanding battery usage in more detail than the audio/networking/processing/display/other breakdown in Xcode Organizer? For example, how can I determine what specific networking behavior is involved?
Thanks for the question! If you’re looking to understand field data, I’d recommend using MetricKit: MXNetworkTransferMetric for networking, and the respective metrics for CPU, display, GPU, location, and others. For profiling at-desk, you can use Xcode gauges or Instruments templates. New in Instruments 26, Power Profiler can show you a subsystem-level breakdown of your application’s power usage. Please tune in for the currently running presentation that describes this tool in more detail, or watch the WWDC25 session “Profile and optimize power usage in your app.”
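As a sketch, subscribing to MetricKit payloads and reading the network transfer metric might look like this:

```swift
import MetricKit

final class MetricsSubscriber: NSObject, MXMetricManagerSubscriber {
    func start() {
        MXMetricManager.shared.add(self)
    }

    // Payloads arrive roughly once per day with aggregated field data.
    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            if let network = payload.networkTransferMetrics {
                print("Cellular down:", network.cumulativeCellularDownload)
                print("Wi-Fi down:", network.cumulativeWifiDownload)
            }
        }
    }
}
```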
Are there any special techniques for improving the performance of extensions, such as system extensions on macOS?
The best tool for profiling system extensions is Instruments. Tools like Time Profiler can help you understand where time is being spent. To let Instruments attach to your extensions, make sure the debugger can attach to it. Then, configure Instruments to target your local device and use the Attach option in the target chooser. Some signing tips and tricks can be found in Apple’s documentation. For a primer on CPU profiling, watch “Optimize CPU Performance with Instruments”.
In our app, a Combine Published object updates the UI. We can use either receive(on: RunLoop.main) or receive(on: DispatchQueue.main), and both seem to work. Is there a recommended choice between the two?
Both options will schedule work to be completed on the main thread and allow you to update your UI. The decision depends on the exact details. Using DispatchQueue.main will result in your work executing on the main thread as soon as possible. Using RunLoop.main will schedule work onto the run loop, which can result in delays to your UI updates. Consider a scenario where you are scrolling: updating your UI frequently while scrolling can degrade performance, so scheduling onto RunLoop.main could result in smoother scrolling. However, if you need the UI to update as quickly as possible, scheduling onto DispatchQueue.main is the best choice.
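A sketch of the same pipeline scheduled both ways (valuePublisher is a stand-in for your @Published stream):

```swift
import Combine
import Foundation

let valuePublisher = PassthroughSubject<Int, Never>()

// Delivers on the main thread as soon as possible.
let immediate = valuePublisher
    .receive(on: DispatchQueue.main)
    .sink { print("dispatch:", $0) }

// Delivers via the main run loop in its default mode, so updates
// pause while the run loop is in tracking mode (e.g. during scrolling).
let deferred = valuePublisher
    .receive(on: RunLoop.main)
    .sink { print("runloop:", $0) }

valuePublisher.send(42)
```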
Liquid Glass 💧
Should we embed ScrollViews or Lists in GlassEffectContainers if the items use Liquid Glass?
GlassEffectContainer should be applied to conceptually grouped UI elements, as the paths for the glass of each element can blend together. The contents of a scroll view or list are almost always not all part of the same conceptual group of elements, so GlassEffectContainer shouldn’t wrap your list or scroll view.
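A minimal sketch of that grouping, using the iOS 26 Liquid Glass APIs (treat the exact spacing and button set as illustrative):

```swift
import SwiftUI

struct PlaybackControls: View {
    var body: some View {
        // One container per conceptual group, so the glass shapes of
        // neighboring controls can blend with each other.
        GlassEffectContainer(spacing: 12) {
            HStack(spacing: 12) {
                Button("Rewind") {}
                    .glassEffect()
                Button("Play") {}
                    .glassEffect()
                Button("Forward") {}
                    .glassEffect()
            }
        }
    }
}
```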
Are there additional considerations for improving performance on visionOS? Will you share those today?
Today we will not be presenting visionOS-specific performance considerations. The optimizations shared by the presenter for Liquid Glass are applicable across all platforms, including visionOS.
Would it be beneficial to wrap toolbar buttons in a GlassEffectContainer, or is the toolbar optimized by default?
Controls in a SwiftUI toolbar will be correctly handled for you behind the scenes! If you’re placing the controls yourself using other layout primitives, or something like safeAreaBar, that’s when adding a GlassEffectContainer becomes important!
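For the manually placed case, a sketch might look like this (the safeAreaBar placement and button set are illustrative):

```swift
import SwiftUI

struct ContentScreen: View {
    var body: some View {
        ScrollView {
            Text("Content")
        }
        .safeAreaBar(edge: .bottom) {
            // Self-placed controls need their own container.
            GlassEffectContainer(spacing: 16) {
                HStack(spacing: 16) {
                    Button("Share") {}
                        .glassEffect()
                    Button("Save") {}
                        .glassEffect()
                }
            }
        }
    }
}
```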
On Macs, do third-party displays without HDR reduce the rendering effects applied to Liquid Glass?
Thanks for the great question! Using a non-HDR display would not change the presence of effects applied to Liquid Glass. However, certain effects would be in standard dynamic range instead of high dynamic range.
Instruments 🛠️
Are Instruments limited only to physical devices? I’m a beginner with Instruments.
It depends! Some tools within Instruments support Simulator devices, while others require physical devices. That said, to get a representative metric of what your users will experience, we recommend profiling on a physical device. Ideally, you should always test on the oldest devices supported by your deployment target to understand lower-bound resource constraints.
What’s the difference between a “hitch” and a “hang” as it relates to the Instruments tool?
A hang is a noticeable delay in a discrete user interaction, and it’s almost always the result of long-running work on the main thread. Long-running work on the main thread can also cause hitches, but for hitches, the threshold is lower. Discrete interaction delays only start becoming noticeable as hangs when the main thread is unresponsive for about 50 ms to 100 ms or longer. However, a delay as small as the length of a single refresh interval — generally between 8 ms and 16 ms, depending on the refresh rate of the display — can cause a hitch. Delays in the render server can also cause a hitch, but usually aren’t long enough to cause a hang.
Foundation Models Framework 🦾
Can the Foundation Model understand different languages? For example, if my data and descriptions are in English, but I want the generated language to be Danish?
The Foundation Models on-device system model is multilingual, supporting any language that Apple Intelligence supports. Your prompt and target language can differ. To learn more about handling localization, check out “Support languages and locales with Foundation Models”.
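For example, a sketch of English instructions requesting Danish output (API names follow the Foundation Models framework):

```swift
import FoundationModels

func summarizeInDanish(_ text: String) async throws -> String {
    // The instructions and prompt are English; the model is asked
    // to answer in Danish.
    let session = LanguageModelSession(
        instructions: "Respond in Danish, regardless of the input language."
    )
    let response = try await session.respond(to: "Summarize: \(text)")
    return response.content
}
```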
Can several apps use the on-device model concurrently, or is access allowed only by the foreground app? Also, is it correct to assume that a single shared copy of the model’s weights exists in memory, rather than one copy per process?
Weights will be shared across several processes. Multiple apps can use Foundation Models simultaneously, but requests can get serialized due to resource constraints, so response time can differ depending on the number of applications.
Is there a way to communicate with models via a binary protocol?
This isn’t supported today, but if that would be a useful feature for your use case, please capture that in Feedback Assistant!
Which is better for performance and response quality: using longer, more precise field names, or shorter names with longer @Guide descriptions?
The model will see all of the information you provide it, so I would encourage you to think of it in the same way you would if designing for a human to read. Think of the name like a variable name, and the guide like a doc comment. Would a human be more or less confused if you added more detail to the variable name? How about if that detail was in the doc comment instead?
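Following the variable-name/doc-comment analogy, a sketch might look like:

```swift
import FoundationModels

@Generable
struct Recipe {
    // Short, variable-like name; the detail lives in the guide,
    // exactly as it would in a doc comment.
    @Guide(description: "A short, appetizing title for the dish.")
    var title: String

    @Guide(description: "Total preparation time in minutes.")
    var prepTime: Int
}
```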
How can you improve perceived performance with Foundation Models when you need JSON output responses? The demos showed plain-text responses, not structured output. Can a JSON response be streamed?
With structured textual data like JSON, you likely won’t get anything that parses correctly until the model is done streaming. If you just need structured data, you could instead try using a Generable type which contains all the information you need. Generable does support streaming and guarantees correctness at all phases of generation.
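A sketch of streaming a Generable type (reusing the hypothetical Recipe above) instead of waiting on raw JSON:

```swift
import FoundationModels

func streamRecipe() async throws {
    let session = LanguageModelSession()
    let stream = session.streamResponse(
        to: "Suggest a quick weeknight dinner.",
        generating: Recipe.self
    )
    for try await partial in stream {
        // Every element is a structurally valid partial Recipe, so you
        // can render it immediately while fields continue to fill in.
        print(partial)
    }
}
```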
Does the increase in latency scale linearly with the amount of information in the context window for Foundation Models?
Yes! Latency does scale roughly linearly with context window size. Remember, though, that you can mitigate some of this latency by keeping the prefix of your prompt consistent so that you get the benefits of prefix caching.
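One way to get that benefit in practice, as a sketch: keep the long shared context in the session’s instructions so it forms a stable prefix, and vary only the short per-request suffix (styleGuide stands in for a long, shared chunk of context).

```swift
import FoundationModels

func rewrite(_ sentence: String, styleGuide: String) async throws -> String {
    // Identical instructions across requests form a stable prompt
    // prefix, which is what prefix caching can reuse.
    let session = LanguageModelSession(instructions: styleGuide)
    let response = try await session.respond(to: "Rewrite: \(sentence)")
    return response.content
}
```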
Snap
What format did Snap export from their tool so that it could be ingested by Instruments? Where is this format documented?
It looks like Snap used a custom tool to visualize their trace files, and used signposts to add custom intervals to Instruments traces recorded at desk.
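Adding such custom intervals is straightforward with OSSignposter; a minimal sketch (the subsystem and interval names are hypothetical):

```swift
import os

let signposter = OSSignposter(subsystem: "com.example.app",
                              category: "Rendering")

func renderFrame() {
    // Shows up as an interval on a custom track in Instruments.
    let state = signposter.beginInterval("Render Frame")
    defer { signposter.endInterval("Render Frame", state) }
    // ... work being measured ...
}
```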
Acknowledgments 🏆
A huge thank-you to everyone who joined in and shared thoughtful, insightful, and engaging questions throughout the session — your curiosity and input made the discussion truly rich and collaborative.
Special thanks to:
Adam Hill, Brett Best, Giorgio Latour, Shakur Bost, Mustafa Khalil, Danny Khan, Jerald Abille, Jonathan Ballerano, Greg Sapienza, Mel Hsu, Kamil Chmielewski, Nicola, Mihaela Mihaljevic, Alexander Steiner, Antonios Keremidis, Brendan Duddridge, Alexander Degtyarev, Sheba Mayer, Alexander Bichurin, Igor Ryzhkov, Tanel Treuberg, Greg Cooksey, Rose Silicon, Sonia Ziv, and Faiq.
Finally, a heartfelt thank-you to the Apple team and moderators for leading the session, sharing expert guidance, and offering such clear explanations of app optimization techniques. Your contributions made this session an outstanding learning experience for everyone involved.
One more thing…
Ever tried to explain “Yak Shaving,” “Spaghetti Code,” or “Imposter Syndrome”? Now you don’t have to — just send a sticker.
TecTalk turns everyday developer slang into fun, relatable stickers for your chats. Whether you’re venting about bugs or celebrating a successful deploy, there’s a sticker for every tech mood.
Created by me 🧑‍💻 — made by a dev, for devs — and available now at a very affordable price.
Express your inner techie. Stop typing. Start sticking.


