At the upcoming WWDC, Apple is expected to announce an on-device large language model (LLM). The next version of the iOS SDK will likely make it easier for developers to integrate AI features into their apps. While we await Apple's debut of its own generative AI models, companies like OpenAI and Google already provide SDKs that let iOS developers add AI features to mobile apps. In this tutorial, we will explore Google Gemini, formerly known as Bard, and demonstrate how to use its API to build a simple SwiftUI app.
We are going to build a Q&A app that uses the Gemini API. The app features a simple UI with a text field for users to enter their questions. Behind the scenes, the user's question is sent to Google Gemini to retrieve the answer.

Please note that you need to use Xcode 15 (or later) to follow this tutorial.
Getting Started with Google Gemini APIs
Assuming you haven't worked with Gemini before, the very first thing to do is obtain an API key for the Gemini APIs. To create one, head over to Google AI Studio and click the Create API key button.

Using Gemini APIs in Swift Apps
You should now have your API key ready; we'll use it in our Xcode project. Open Xcode and create a new SwiftUI project, which I'll call GeminiDemo. To store the API key, create a property list file named GenerativeAI-Info.plist. In this file, create a key named API_KEY and enter your API key as the value.

To read the API key from the property file, create another Swift file named APIKey.swift and add the following code to it:
import Foundation

enum APIKey {
    // Fetch the API key from `GenerativeAI-Info.plist`
    static var `default`: String {
        guard let filePath = Bundle.main.path(forResource: "GenerativeAI-Info", ofType: "plist")
        else {
            fatalError("Couldn't find file 'GenerativeAI-Info.plist'.")
        }

        let plist = NSDictionary(contentsOfFile: filePath)

        guard let value = plist?.object(forKey: "API_KEY") as? String else {
            fatalError("Couldn't find key 'API_KEY' in 'GenerativeAI-Info.plist'.")
        }

        if value.starts(with: "_") {
            fatalError(
                "Follow the instructions at https://ai.google.dev/tutorials/setup to get an API key."
            )
        }

        return value
    }
}
If you decide to use a different name for the property file instead of the original GenerativeAI-Info.plist, you will need to update the code in APIKey.swift. This change is necessary because the code references the exact filename when fetching the API key, so any change to the property file name must be reflected in the code for the key to be retrieved successfully.
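For example, if you named the file MySecrets.plist (a hypothetical name), the lookup in APIKey.swift would change like this:

import Foundation

enum APIKey {
    // Hypothetical variant: the property list is named "MySecrets.plist",
    // so the resource name and the error message are updated to match.
    static var `default`: String {
        guard let filePath = Bundle.main.path(forResource: "MySecrets", ofType: "plist"),
              let value = NSDictionary(contentsOfFile: filePath)?.object(forKey: "API_KEY") as? String
        else {
            fatalError("Couldn't find 'API_KEY' in 'MySecrets.plist'.")
        }
        return value
    }
}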
Adding the SDK Using Swift Package
The Google Gemini SDK is available as a Swift package, which makes it straightforward to add to your Xcode project. To do this, right-click the project folder in the project navigator and select Add Package Dependencies. In the dialog, enter the following package URL:
https://github.com/google/generative-ai-swift
You can then click the Add Package button to download and add the GoogleGenerativeAI package to the project.
Building the App UI
Let's start with the UI. It's simple: just a text field for user input and a label to display responses from Google Gemini.
Open ContentView.swift and declare the following state properties:
@State private var textInput = ""
@State private var response: LocalizedStringKey = "Hello! How can I help you today?"

@State private var isThinking = false
The textInput variable captures the user's input from the text field, while the response variable holds the response returned by the API. Because the API takes a moment to respond, we also include an isThinking variable to track the request status and drive an animated effect.
Next, replace the body variable with the following code to create the user interface:
VStack(alignment: .leading) {

    ScrollView {
        VStack {
            Text(response)
                .font(.system(.title, design: .rounded, weight: .medium))
                .opacity(isThinking ? 0.2 : 1.0)
        }
    }
    .contentMargins(.horizontal, 15, for: .scrollContent)

    Spacer()

    HStack {
        TextField("Type your message here", text: $textInput)
            .textFieldStyle(.plain)
            .padding()
            .background(Color(.systemGray6))
            .clipShape(RoundedRectangle(cornerRadius: 20))
    }
    .padding(.horizontal)
}
The code is quite straightforward, especially if you have some experience with SwiftUI. After making these changes, you should see the following user interface in the preview.

Integrating with Google Gemini
Before you can use the Google Gemini APIs, you first need to import the GoogleGenerativeAI module:
import GoogleGenerativeAI
Next, declare a model variable and initialize the generative model like this:
let model = GenerativeModel(name: "gemini-pro", apiKey: APIKey.default)
Here, we use the gemini-pro model, which is designed to generate text from text-only input.
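If you want finer control over the output, the Swift SDK also accepts an optional generation configuration when you create the model. The snippet below is only a sketch based on the SDK at the time of writing; parameter names and defaults may change in newer releases:

import GoogleGenerativeAI

// Sketch: tune the model's output with a GenerationConfig (assumed SDK type).
let config = GenerationConfig(
    temperature: 0.4,        // lower values produce more deterministic answers
    topP: 0.95,
    topK: 40,
    maxOutputTokens: 1024
)

let tunedModel = GenerativeModel(name: "gemini-pro", apiKey: APIKey.default, generationConfig: config)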
To send the text to Google Gemini, let's create a new function called sendMessage():
func sendMessage() {
    response = "Thinking..."

    withAnimation(.easeInOut(duration: 0.6).repeatForever(autoreverses: true)) {
        isThinking.toggle()
    }

    Task {
        do {
            let generatedResponse = try await model.generateContent(textInput)

            guard let text = generatedResponse.text else {
                textInput = "Sorry, Gemini got some problems.\nPlease try again later."
                return
            }

            textInput = ""
            response = LocalizedStringKey(text)

            isThinking.toggle()
        } catch {
            response = "Something went wrong!\n\(error.localizedDescription)"
        }
    }
}
As you can see from the code above, you only need to call the model's generateContent method with the input text to receive the generated response. The result comes back in Markdown format, so we wrap the returned text in a LocalizedStringKey, which lets SwiftUI's Text view render it properly.
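Optionally, if you prefer to display the answer while it is still being generated rather than waiting for the full response, the SDK also provides a streaming call. The following is a sketch of an alternative function you could add to ContentView, assuming the generateContentStream API documented for the SDK:

// Sketch: stream the reply chunk by chunk (assumes the SDK's generateContentStream API).
func streamMessage() {
    response = ""

    Task {
        do {
            var fullText = ""
            for try await chunk in model.generateContentStream(textInput) {
                if let text = chunk.text {
                    // Accumulate plain text, then wrap it for display.
                    fullText += text
                    response = LocalizedStringKey(fullText)
                }
            }
            textInput = ""
        } catch {
            response = "Something went wrong!\n\(error.localizedDescription)"
        }
    }
}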
To call the sendMessage() function, update the TextField view and attach the onSubmit modifier to it:
TextField("Type your message here", text: $textInput)
    .textFieldStyle(.plain)
    .padding()
    .background(Color(.systemGray6))
    .clipShape(RoundedRectangle(cornerRadius: 20))
    .onSubmit {
        sendMessage()
    }
With this in place, when the user finishes typing and presses the return key, the sendMessage() function is called to submit the text to Google Gemini.
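Putting all the pieces together, the complete ContentView should look roughly like the sketch below, assembled from the snippets above:

import SwiftUI
import GoogleGenerativeAI

struct ContentView: View {
    @State private var textInput = ""
    @State private var response: LocalizedStringKey = "Hello! How can I help you today?"
    @State private var isThinking = false

    let model = GenerativeModel(name: "gemini-pro", apiKey: APIKey.default)

    var body: some View {
        VStack(alignment: .leading) {

            ScrollView {
                VStack {
                    Text(response)
                        .font(.system(.title, design: .rounded, weight: .medium))
                        .opacity(isThinking ? 0.2 : 1.0)
                }
            }
            .contentMargins(.horizontal, 15, for: .scrollContent)

            Spacer()

            HStack {
                TextField("Type your message here", text: $textInput)
                    .textFieldStyle(.plain)
                    .padding()
                    .background(Color(.systemGray6))
                    .clipShape(RoundedRectangle(cornerRadius: 20))
                    .onSubmit {
                        sendMessage()
                    }
            }
            .padding(.horizontal)
        }
    }

    func sendMessage() {
        response = "Thinking..."

        withAnimation(.easeInOut(duration: 0.6).repeatForever(autoreverses: true)) {
            isThinking.toggle()
        }

        Task {
            do {
                let generatedResponse = try await model.generateContent(textInput)

                guard let text = generatedResponse.text else {
                    textInput = "Sorry, Gemini got some problems.\nPlease try again later."
                    return
                }

                textInput = ""
                response = LocalizedStringKey(text)
                isThinking.toggle()
            } catch {
                response = "Something went wrong!\n\(error.localizedDescription)"
            }
        }
    }
}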
That's it! You can now run the app in the simulator, or directly in the preview, to test the AI feature.

Summary
This tutorial shows how to integrate Google Gemini AI into a SwiftUI app. It takes only a few lines of code to add generative AI features to your app. In this demo, we used the gemini-pro model to generate text from text-only input.
However, Gemini's capabilities are not limited to text-based input. Gemini also offers a multimodal model named gemini-pro-vision that lets developers provide both text and images. We encourage you to take full advantage of this tutorial by modifying the provided code and experimenting with it.
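As a starting point for your own experiments, here is a sketch of a text-and-image request with gemini-pro-vision. It assumes the SDK accepts UIImage values alongside text (as Google's documentation shows at the time of writing) and that an image named "sample" exists in your asset catalog:

import UIKit
import GoogleGenerativeAI

// Sketch: a multimodal request with the gemini-pro-vision model.
func describeImage() async {
    let visionModel = GenerativeModel(name: "gemini-pro-vision", apiKey: APIKey.default)

    // "sample" is a hypothetical image in the asset catalog.
    guard let image = UIImage(named: "sample") else { return }

    do {
        let result = try await visionModel.generateContent("Describe what you see in this photo.", image)
        print(result.text ?? "No response")
    } catch {
        print("Error: \(error.localizedDescription)")
    }
}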
If you have any questions about the tutorial, please let me know by leaving a comment below.