Introducing Lima

After nearly four years and over 500 stars on GitHub, I've decided it's time to retire MarkupKit. Despite a respectable level of developer interest, the idea of building an application using XML never seemed to fully resonate with the broader iOS community.

However, even in the absence of a markup-based implementation, the concept of declarative UI is still highly applicable. Today I am happy to introduce Lima, a new Swift-based DSL for constructing iOS and tvOS applications. The project's name comes from the nautical L or Lima flag, representing the first letter of the word "layout":

Lima retains most MarkupKit functionality, and improves on it in a number of ways:

  • Because it is written in Swift, UI code created with Lima is compiled, which means it is validated at build time rather than at run time. The lack of compile-time validation was a major drawback of the markup approach.

  • Further, since it is a Swift-based DSL, developers can finally take advantage of code completion. Although I experimented with a number of different approaches over the years, this is something I was never quite able to get working in XML.

  • Again, because it is written in Swift, Lima code is refactorable. It facilitates better code reuse, and allows developers to employ modern Xcode features like image and color asset literals in UI declarations. Lima also reduces overall file count, since a separate XML document is no longer required.

Converting markup to Lima syntax is straightforward. For example, given this markup:

<LMColumnView spacing="16">
    <UIImageView image="world.png"/>
    <UILabel text="Hello, World!"/>
</LMColumnView>

the Lima equivalent is as follows:

LMColumnView(spacing: 16,
    UIImageView(image: UIImage(named: "world.png")),
    UILabel(text: "Hello, World!")
)

It's just as readable, and even slightly more concise, since there's no need for closing tags.
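
Lima initializers also accept a trailing closure that is passed the newly created view, which is how the table view example later in this document captures references to its subviews. The following is a minimal sketch of that pattern, assuming the initializer forms used elsewhere in this document (the class and property names are illustrative):

import UIKit
import Lima

class GreetingViewController: UIViewController {
    var greetingLabel: UILabel!

    override func loadView() {
        // The trailing closure receives the view it decorates, allowing a
        // reference to be captured for later updates
        view = LMColumnView(spacing: 16,
            UIImageView(image: UIImage(named: "world.png")),
            UILabel(text: "Hello, World!") { self.greetingLabel = $0 }
        )
    }
}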

Thanks to everyone who has supported or contributed to MarkupKit. I'm hoping you will find Lima even more useful!

For more information, please see the project README.

Creating a Universal Framework in Xcode 10

11/13/2018 Updated for Xcode 10/Swift 4.2

The following script can be used to create a universal iOS framework (i.e. one that will run both in the simulator and on an actual device). It should work with both Swift and Objective-C projects:

FRAMEWORK=<framework name>

BUILD=build
FRAMEWORK_PATH=$FRAMEWORK.framework

# iOS
rm -Rf $FRAMEWORK-iOS/$BUILD
rm -f $FRAMEWORK-iOS.tar.gz

xcodebuild archive -project $FRAMEWORK-iOS/$FRAMEWORK-iOS.xcodeproj -scheme $FRAMEWORK -sdk iphoneos SYMROOT=$BUILD
xcodebuild build -project $FRAMEWORK-iOS/$FRAMEWORK-iOS.xcodeproj -target $FRAMEWORK -sdk iphonesimulator SYMROOT=$BUILD

cp -RL $FRAMEWORK-iOS/$BUILD/Release-iphoneos $FRAMEWORK-iOS/$BUILD/Release-universal
cp -RL $FRAMEWORK-iOS/$BUILD/Release-iphonesimulator/$FRAMEWORK_PATH/Modules/$FRAMEWORK.swiftmodule/* $FRAMEWORK-iOS/$BUILD/Release-universal/$FRAMEWORK_PATH/Modules/$FRAMEWORK.swiftmodule

lipo -create $FRAMEWORK-iOS/$BUILD/Release-iphoneos/$FRAMEWORK_PATH/$FRAMEWORK $FRAMEWORK-iOS/$BUILD/Release-iphonesimulator/$FRAMEWORK_PATH/$FRAMEWORK -output $FRAMEWORK-iOS/$BUILD/Release-universal/$FRAMEWORK_PATH/$FRAMEWORK

tar -czv -C $FRAMEWORK-iOS/$BUILD/Release-universal -f $FRAMEWORK-iOS.tar.gz $FRAMEWORK_PATH $FRAMEWORK_PATH.dSYM

When run from the directory containing the framework project folder (i.e. the parent of the <framework name>-iOS directory), this script invokes xcodebuild twice on the framework project and joins the resulting binaries into a single universal binary. It then packages the framework in a gzipped tarball and places it in that same directory.

However, apps that contain “fat” binaries like this don't pass app store validation. Before submitting an app containing a universal framework, the binaries need to be trimmed so that they include only iOS-native code. The following script can be used to do this:

FRAMEWORK=$1
echo "Trimming $FRAMEWORK..."

FRAMEWORK_EXECUTABLE_PATH="${BUILT_PRODUCTS_DIR}/${FRAMEWORKS_FOLDER_PATH}/$FRAMEWORK.framework/$FRAMEWORK"

EXTRACTED_ARCHS=()

for ARCH in $ARCHS
do
    echo "Extracting $ARCH..."
    lipo -extract "$ARCH" "$FRAMEWORK_EXECUTABLE_PATH" -o "$FRAMEWORK_EXECUTABLE_PATH-$ARCH"
    EXTRACTED_ARCHS+=("$FRAMEWORK_EXECUTABLE_PATH-$ARCH")
done

echo "Merging binaries..."
lipo -o "$FRAMEWORK_EXECUTABLE_PATH-merged" -create "${EXTRACTED_ARCHS[@]}"
rm "${EXTRACTED_ARCHS[@]}"

rm "$FRAMEWORK_EXECUTABLE_PATH"
mv "$FRAMEWORK_EXECUTABLE_PATH-merged" "$FRAMEWORK_EXECUTABLE_PATH"

echo "Done."

To use this script:

  1. Place the script in your project root directory and name it trim.sh or something similar
  2. Create a new “Run Script” build phase after the “Embed Frameworks” phase
  3. Rename the new build phase to “Trim Framework Executables” or similar (optional)
  4. Invoke the script for each framework you want to trim, passing the framework name as an argument (e.g. ${SRCROOT}/trim.sh <framework name>)

For more ways to simplify iOS app development, please see my projects on GitHub:

  • Lima – Declarative UI for iOS and tvOS
  • Kilo – Lightweight REST for iOS and tvOS

Dynamically Loading Table View Images in iOS

11/13/2018 Updated for Xcode 10/Swift 4.2

iOS applications often display thumbnail images in table views alongside other text-based content such as contact names or product descriptions. However, these images are not usually delivered with the initial response, but must instead be retrieved separately afterward. They are typically downloaded in the background as needed to avoid blocking the main thread, which would temporarily render the user interface unresponsive.
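
In its simplest form, this amounts to fetching the image data off the main thread and then applying it on the main queue. A minimal sketch of that pattern using URLSession is shown below (the helper name is illustrative; the example that follows uses the Kilo framework instead):

import UIKit

// Illustrative helper: fetch an image in the background, then hand it back on
// the main queue so UIKit can be updated safely
func loadThumbnail(from url: URL, completion: @escaping (UIImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, response, error in
        let image = data.flatMap { UIImage(data: $0) }

        OperationQueue.main.addOperation {
            completion(image)
        }
    }.resume()
}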

For example, consider this web service, which returns a list of simulated photo data:

[
  {
    "albumId": 1,
    "id": 1,
    "title": "accusamus beatae ad facilis cum similique qui sunt",
    "url": "http://placehold.it/600/92c952",
    "thumbnailUrl": "http://placehold.it/150/92c952"
  },
  {
    "albumId": 1,
    "id": 2,
    "title": "reprehenderit est deserunt velit ipsam",
    "url": "http://placehold.it/600/771796",
    "thumbnailUrl": "http://placehold.it/150/771796"
  },
  {
    "albumId": 1,
    "id": 3,
    "title": "officia porro iure quia iusto qui ipsa ut modi",
    "url": "http://placehold.it/600/24f355",
    "thumbnailUrl": "http://placehold.it/150/24f355"
  },
  ...
]

Each record contains a photo ID, album ID, and title, as well as URLs for both the thumbnail and full-size images.

View Controller

A basic user interface for displaying results returned by this service is shown below:

Row data is stored in an array of Photo instances:

struct Photo: Decodable {
    let id: Int
    let albumId: Int
    let title: String?
    var url: URL?
    var thumbnailUrl: URL?
}

Previously loaded thumbnail images are stored in a dictionary that associates UIImage instances with photo IDs:

class ViewController: UITableViewController {
    // Row data
    var photos: [Photo]?

    // Image cache
    var thumbnailImages: [Int: UIImage] = [:]

    ...    
}

The photo list is loaded the first time the view appears. The WebServiceProxy class provided by the open-source Kilo framework is used to retrieve the data:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Load photo data
    if (photos == nil) {
        let serviceProxy = WebServiceProxy(session: URLSession.shared, serverURL: URL(string: "https://jsonplaceholder.typicode.com")!)

        serviceProxy.invoke(.get, path: "/photos") { (result: [Photo]?, error: Error?) in
            self.photos = result ?? []

            self.tableView.reloadData()
        }
    }
}

Table view cells are represented by the following class, implemented using the open-source Lima layout framework:

class PhotoCell: LMTableViewCell {
    var thumbnailImageView: UIImageView!
    var titleLabel: UILabel!

    override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)

        setContent(LMRowView(
            UIImageView(contentMode: .scaleAspectFit, width: 50, height: 50) { self.thumbnailImageView = $0 },
            LMSpacer(width: 0.5, backgroundColor: UIColor.lightGray),
            LMColumnView(spacing: 0,
                UILabel(font: UIFont.preferredFont(forTextStyle: .body), numberOfLines: 2) { self.titleLabel = $0 },
                LMSpacer()
            )
        ), ignoreMargins: false)
    }

    required init?(coder decoder: NSCoder) {
        return nil
    }
}

Cell content is generated as follows. The corresponding Photo instance is retrieved from the photos array and used to configure the cell. If the thumbnail image is already available in the cache, it is used to populate the cell's thumbnail image view. Otherwise, it is loaded from the server and added to the cache. If the cell is still visible when the image request returns, it is updated immediately:

override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return photos?.count ?? 0
}

override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let photoCell = tableView.dequeueReusableCell(withIdentifier: PhotoCell.description(), for: indexPath) as! PhotoCell

    guard let photo = photos?[indexPath.row] else {
        fatalError()
    }

    // Attempt to load image from cache
    photoCell.thumbnailImageView.image = thumbnailImages[photo.id]

    if photoCell.thumbnailImageView.image == nil,
        let url = photo.thumbnailUrl,
        let scheme = url.scheme,
        let host = url.host,
        let serverURL = URL(string: String(format: "%@://%@", scheme, host)) {
        // Request image
        let serviceProxy = WebServiceProxy(session: URLSession.shared, serverURL: serverURL)

        serviceProxy.invoke(.get, path: url.path, responseHandler: { content, contentType in
            return UIImage(data: content)
        }) { (result: UIImage?, error: Error?) in
            // Add image to cache and update cell, if visible
            if let thumbnailImage = result {
                self.thumbnailImages[photo.id] = thumbnailImage

                if let cell = tableView.cellForRow(at: indexPath) as? PhotoCell {
                    cell.thumbnailImageView.image = thumbnailImage
                }
            }
        }
    }

    photoCell.titleLabel.text = photo.title

    return photoCell
}

Finally, if the system is running low on memory, the image cache is cleared:

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()

    thumbnailImages.removeAll()
}

Summary

This article provided an overview of how images can be dynamically loaded to populate table view cells in iOS. Complete source code for this example can be found here.

Caching Web Service Response Data in iOS

11/13/2018 Updated for Xcode 10/Swift 4.2

Many iOS applications obtain data via web APIs that return JSON documents. For example, the following table view controller uses the Kilo WebServiceProxy class to invoke a simple web service that returns a simulated list of users as JSON. The controller requests the user list when the view first appears, and reloads the table view once the data has been retrieved:

class ViewController: UITableViewController {
    var users: [User]?

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    
        // Load user data
        if (users == nil) {
            let serviceProxy = WebServiceProxy(session: URLSession.shared, serverURL: URL(string: "https://jsonplaceholder.typicode.com")!)
    
            serviceProxy.invoke(.get, path: "/users") { (result: [User]?, error: Error?) in
                if (error == nil) {
                    self.users = result ?? []
    
                    self.tableView.reloadData()
                }        
            }
        }
    }
    
    ...
}

User records are represented by instances of the following structure:

struct User: Codable {
    struct Address: Codable {
        let street: String
        let suite: String
        let city: String
        let zipcode: String

        struct Geo: Codable {
            let lat: String
            let lng: String
        }

        let geo: Geo
    }

    struct Company: Codable {
        let name: String
        let catchPhrase: String
        let bs: String
    }

    let id: Int
    let name: String
    let username: String
    let email: String
    let address: Address
    let phone: String
    let website: String
    let company: Company
}

The results are shown below:

This works fine when both the device and the service are online, but it fails if either one is not. In some cases this may be acceptable, but other times it might be preferable to show the user the most recent response when more current data is not available.

To facilitate offline support, the response data must be cached. However, since writing to the file system is a potentially time-consuming operation, it should be done in the background to avoid blocking the main (UI) thread. Here, the data is written using an operation queue to ensure that access to it is serialized:

class ViewController: UITableViewController {
    var userCacheURL: URL?
    let userCacheQueue = OperationQueue()

    var users: [User]?

    override func viewDidLoad() {
        super.viewDidLoad()

        title = "Response Data Cache"

        tableView.estimatedRowHeight = 2
        tableView.register(UserCell.self, forCellReuseIdentifier: UserCell.description())

        if let cacheURL = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first {
            userCacheURL = cacheURL.appendingPathComponent("users.json")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Load user data
        if (users == nil) {
            let serviceProxy = WebServiceProxy(session: URLSession.shared, serverURL: URL(string: "https://jsonplaceholder.typicode.com")!)

            serviceProxy.invoke(.get, path: "/users") { (result: [User]?, error: Error?) in
                if (error == nil) {
                    self.users = result ?? []

                    self.tableView.reloadData()

                    // Write the response to the cache
                    if let userCacheURL = self.userCacheURL {
                        self.userCacheQueue.addOperation() {
                            let jsonEncoder = JSONEncoder()

                            if let data = try? jsonEncoder.encode(self.users) {
                                try? data.write(to: userCacheURL)
                            }
                        }
                    }
                } else {
                    ...
                }
            }
        }
    }
    
    ...
}

Finally, the data can be retrieved from the cache if the web service call fails. The data is read from the cache in the background, and the UI is updated by reloading the table view on the main thread:

class ViewController: UITableViewController {
    ...

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Load user data
        if (users == nil) {
            let serviceProxy = WebServiceProxy(session: URLSession.shared, serverURL: URL(string: "https://jsonplaceholder.typicode.com")!)

            serviceProxy.invoke(.get, path: "/users") { (result: [User]?, error: Error?) in
                if (error == nil) {
                    ...
                } else {
                    // Read the data from the cache
                    if let userCacheURL = self.userCacheURL {
                        self.userCacheQueue.addOperation() {
                            let jsonDecoder = JSONDecoder()

                            if let data = try? Data(contentsOf: userCacheURL) {
                                self.users = (try? jsonDecoder.decode([User].self, from: data)) ?? []

                                // Update the UI
                                OperationQueue.main.addOperation() {
                                    self.tableView.reloadData()
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    
    ...
}

Now, as long as the application has been able to connect to the server at least once, it can function either online or offline, using the cached response data.

Complete source code for this example can be found here.

Applying Style Sheets Client-Side in iOS

11/13/2018 Updated for Xcode 10/Swift 4.2

While native mobile applications can often provide a more seamless and engaging user experience than a browser-based app, it is occasionally convenient to present certain types of content using a web view. Specifically, any content that is primarily text-based and requires minimal user interaction may be a good candidate for presentation as HTML; for example, product descriptions, user reviews, or instructional content.

However, browser-based content often tends to look out of place within a native app. For example, consider the following simple HTML document:

<html>
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="initial-scale=1.0"/>
</head>
<body>
    <h1>Lorem Ipsum</h1>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
    <ul>
    <li>Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</li> 
    <li>Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.</li>
    <li>Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</li>
    </ul>
</body>
</html>

Rendered by WKWebView, the result looks like this:

Because the text is displayed using the default browser font rather than the system font, it is immediately obvious that the content is not native. To make it appear more visually consistent with other elements of the user interface, a stylesheet could be used to render the document using the system font:

<head>
    ...
    <style>
    body {
        font-family: '-apple-system';
        font-size: 10pt;
    }
    </style>
</head>

The result is shown below. The styling of the text now matches the rest of the UI:

However, while this approach may work for a simple example, it does not scale well: the same styles would need to be embedded in every document the app displays, and different app (or OS) versions may have different styling requirements.

By applying the stylesheet on the client, the presentation can be completely separated from the content. This can be accomplished by linking to the stylesheet rather than embedding it inline:

<head>
    ...
    <link rel="stylesheet" type="text/css" href="example.css"/>
</head>

However, instead of downloading the stylesheet along with the HTML document, it is distributed with the application itself and applied using the load(_:mimeType:characterEncodingName:baseURL:) method of the WKWebView class. The first argument to this method contains the (unstyled) HTML content, and the last contains the base URL against which relative URLs in the document (such as the stylesheet) will be resolved:

class ViewController: UIViewController {
    var webView: WKWebView!

    override func loadView() {
        webView = WKWebView()

        view = webView
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        title = "Client-Side CSS"

        if let url = Bundle.main.url(forResource: "example", withExtension: "html"),
            let data = try? Data(contentsOf: url),
            let resourceURL = Bundle.main.resourceURL {
            webView.load(data, mimeType: "text/html", characterEncodingName: "UTF-8", baseURL: resourceURL)
        }
    }
}

In this example, the document, example.html, is loaded from the main bundle. In a real application, it would most likely be loaded from an actual web server.
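
The same technique also works for content retrieved over the network: the document data is downloaded as usual, and the bundle's resource URL is supplied as the base URL so that relative references still resolve to local resources. A minimal sketch, assuming a hypothetical document URL:

import WebKit

// Hypothetical example: download the HTML, then render it against the locally
// bundled stylesheet by resolving relative references (such as example.css)
// using the app's resource URL
func loadRemoteDocument(_ documentURL: URL, in webView: WKWebView) {
    guard let resourceURL = Bundle.main.resourceURL else {
        return
    }

    URLSession.shared.dataTask(with: documentURL) { data, response, error in
        guard let data = data else {
            return
        }

        // Web views must be updated on the main thread
        OperationQueue.main.addOperation {
            webView.load(data, mimeType: "text/html", characterEncodingName: "UTF-8", baseURL: resourceURL)
        }
    }.resume()
}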

The stylesheet, example.css, is stored in the resource folder of the application's main bundle:

body {
    font-family: '-apple-system';
    font-size: 10pt;
}

The results are identical to the previous example. However, the content and visual design are no longer tightly coupled and can vary independently:
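
Because the stylesheet ships with the application rather than with the content, presentation can also be varied at run time (for example, per app or OS version) simply by resolving the document against a different bundled style folder. A minimal sketch, placed in the view controller shown above and assuming a hypothetical folder of alternate CSS resources in the main bundle:

// Hypothetical example: resolve example.css against an alternate style folder
let styleFolder = "styles-v2" // illustrative folder name

if let url = Bundle.main.url(forResource: "example", withExtension: "html"),
    let data = try? Data(contentsOf: url),
    let baseURL = Bundle.main.resourceURL?.appendingPathComponent(styleFolder, isDirectory: true) {
    webView.load(data, mimeType: "text/html", characterEncodingName: "UTF-8", baseURL: baseURL)
}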

Complete source code for this example can be found here.

Building a Simple Barcode Scanner in iOS

11/13/2018 Updated for Xcode 10/Swift 4.2

Although near-field communication (NFC) technologies such as Apple Pay are continuing to gain traction as a means of inter-device communication, optical mechanisms such as barcodes (both 1D and 2D) are still widely used across a broad range of industries.

This tutorial demonstrates how to easily incorporate barcode scanning functionality into an iOS application. The sample application will use the AVFoundation framework to capture and analyze barcode images using the device's camera.

Create the Xcode Project

The first step is to create the Xcode project we'll be using to build the example app.

  • Open Xcode and select File | New | Project from the menu.
  • In the project template dialog, select iOS > Single View App and click "Next".
  • Name the product "BarcodeScanner" and fill in the remaining fields as appropriate for your team and organization. Ensure that Swift is selected as the development language and click "Next".
  • Save the project to an appropriate location on your system.

Although it doesn't actually do anything yet, you should now be able to run the application by selecting your device in the toolbar and clicking the "Run" button or by pressing Command-R. Note that, since the application will use the camera, it needs to be run on an actual device and must be signed. Make sure that an appropriate development team is selected in the Signing section of the General tab for the "BarcodeScanner" target before attempting to run the app.

Add the CameraView Class

Before we can display a camera preview to the user, we need to create a class to represent the camera view:

  • Select ViewController.swift in the Project Navigator.
  • Add the following line to the imports section:
import AVFoundation
  • Add the following class declaration immediately before the ViewController class that was automatically generated by Xcode:
class CameraView: UIView {
    override class var layerClass: AnyClass {
        get {
            return AVCaptureVideoPreviewLayer.self
        }
    }

    override var layer: AVCaptureVideoPreviewLayer {
        get {
            return super.layer as! AVCaptureVideoPreviewLayer
        }
    }

    func updateOrientation() {
        let videoOrientation: AVCaptureVideoOrientation
        switch UIDevice.current.orientation {
        case .portrait:
            videoOrientation = .portrait

        case .portraitUpsideDown:
            videoOrientation = .portraitUpsideDown

        case .landscapeLeft:
            videoOrientation = .landscapeRight

        case .landscapeRight:
            videoOrientation = .landscapeLeft

        default:
            videoOrientation = .portrait
        }

        layer.connection?.videoOrientation = videoOrientation
    }
}

This class extends UIView and overrides the layerClass property to specify that it should be backed by an instance of AVCaptureVideoPreviewLayer. It also overrides the layer property to cast the return value to AVCaptureVideoPreviewLayer, which will make it easier to access the layer's properties later.

Finally, the class declares an updateOrientation() method that synchronizes the video orientation of the layer's capture connection with the device orientation. This method will be called by the view controller to initialize the view and to update it when the device orientation changes.

Add the Camera View to the View Controller

Next, we'll add the camera view to the view controller:

  • In the ViewController class, declare a member variable to contain the camera view. Since we'll be creating the view instance programmatically, we don't need to tag it as an outlet:
var cameraView: CameraView!
  • Override the loadView() method to initialize the view:
override func loadView() {
    cameraView = CameraView()

    view = cameraView
}

Although the camera view will now be visible when we run the app, it won't yet show anything but a black rectangle. We'll fix this in the next section.

Configure the Capture Session

In order to get the camera view to actually reflect what the camera is seeing, we need to connect it to an AV capture session. We'll use a dispatch queue to execute the more expensive session operations so the UI isn't blocked while waiting for them to complete:

  • Add member variables for the capture session and dispatch queue to ViewController:
let session = AVCaptureSession()
let sessionQueue = DispatchQueue(label: "Session Queue")
  • Add the AVCaptureMetadataOutputObjectsDelegate protocol to the view controller class:
class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    ...
} 
  • Update the viewDidLoad() method to configure the capture session and set the initial camera orientation:
override func viewDidLoad() {
    super.viewDidLoad()

    title = "Barcode Scanner"
    
    session.beginConfiguration()

    if let videoDevice = AVCaptureDevice.default(for: .video) {
        if let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
            session.canAddInput(videoDeviceInput) {
            session.addInput(videoDeviceInput)
        }

        let metadataOutput = AVCaptureMetadataOutput()

        if (session.canAddOutput(metadataOutput)) {
            session.addOutput(metadataOutput)

            metadataOutput.metadataObjectTypes = [
                .code128,
                .code39,
                .code93,
                .ean13,
                .ean8,
                .qr,
                .upce
            ]

            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        }
    }

    session.commitConfiguration()

    cameraView.layer.session = session
    cameraView.layer.videoGravity = .resizeAspectFill

    cameraView.updateOrientation()
}

Add Camera Usage Description to Info.plist

Access to the camera in an iOS application requires the user's permission. In order for iOS to ask for permission, we need to provide a string explaining what the application plans to do with the camera:

  • Add the camera usage description to Info.plist:
<key>NSCameraUsageDescription</key>
<string>to scan barcodes</string>

The application still doesn't do much, but it will now at least prompt the user for permission to access the camera:

Start and Stop the Capture Session

In order for the application to actually display what the camera is seeing, we need to start the capture session. We'll do this when the view appears. We'll also stop the session when the view disappears:

  • Add the following methods to ViewController to start and stop session capture:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    sessionQueue.async {
        self.session.startRunning()
    }
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)

    sessionQueue.async {
        self.session.stopRunning()
    }
}

While it isn't capable of scanning barcodes yet, the application will now at least correctly show the camera preview:

Handle Orientation Changes

Although it now displays the preview, the application doesn't yet respond to changes in orientation. Next, we'll add code to update the camera orientation when the device is rotated:

  • Add the following method to ViewController to update the preview orientation when the device orientation changes:
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)

    cameraView.updateOrientation()
}

Now, when the device is rotated, the preview will reflect the correct orientation.

Capture Barcode Values

Finally, we're ready to add the code that actually captures barcode values. We'll do this using the metadataOutput(_:didOutput:from:) method of the AVCaptureMetadataOutputObjectsDelegate protocol:

  • First, add the following property to ViewController:
var isShowingAlert = false
  • Next, add this method:
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    if !isShowingAlert,
        let scan = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
        let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)

        isShowingAlert = true

        alertController.addAction(UIAlertAction(title: "OK", style: .default) { action in
            self.isShowingAlert = false
        })

        present(alertController, animated: true)
    }
}

When a barcode is recognized, the application will now extract the associated value and present it to the user in an alert view:

Summary

This tutorial demonstrated how to incorporate barcode scanning functionality into an iOS application using the AVFoundation framework. Complete source code for this example can be found here.