Building a Simple Barcode Scanner in iOS

Although near-field communication (NFC) technologies such as Apple Pay are beginning to gain traction as a means of inter-device communication, visual communication mechanisms such as barcodes (both 1D and 2D) are still widely used across a broad range of industries.

This tutorial demonstrates how to easily incorporate barcode scanning functionality into an iOS application. The sample application will use the AVFoundation framework to capture and analyze barcode images using the device's camera. iOS 10, macOS 10.12, and Xcode 8 are required.

Create the Xcode Project

The first step is to create the Xcode project we'll be using to build the example app.

  • Open Xcode and select File | New | Project from the menu.
  • In the project template dialog, select iOS > Single View Application and click "Next".
  • Name the product "BarcodeScanner" and fill in the remaining fields as appropriate for your team and organization. Ensure that Swift is selected as the development language and click "Next".
  • Save the project to an appropriate location on your system.

Although it doesn't actually do anything yet, you should now be able to run the application by selecting your device in the toolbar and clicking the "Run" button or by pressing Command-R. Note that, since the application will use the camera, it needs to be run on an actual device and must be signed. Make sure that an appropriate development team is selected in the Signing section of the General tab for the "BarcodeScanner" target before attempting to run the app.

Add the CameraView Class

Before we can display the camera preview to the user, we need to create a class to represent the camera view.

  • Select ViewController.swift in the Project Navigator.
  • Add the following line to the imports section:
    import AVFoundation
  • Add the following class declaration immediately before the ViewController class that was automatically generated by Xcode:
    class CameraView: UIView {
        override class var layerClass: AnyClass {
            return AVCaptureVideoPreviewLayer.self
        }
    
        var previewLayer: AVCaptureVideoPreviewLayer {
            return layer as! AVCaptureVideoPreviewLayer
        }
    }

This class extends UIView and overrides the layerClass property to specify that the view will be backed by an instance of AVCaptureVideoPreviewLayer. Because Swift doesn't allow a property to be overridden with a more specific type, the class also defines a previewLayer convenience property that casts the view's layer to AVCaptureVideoPreviewLayer. This will make it easier to access the properties of the preview layer later.

Add the Camera View to the View Controller

Next, we'll add the camera view to the view controller.

  • In the ViewController class, declare a member variable to contain the camera view. Since we'll be creating the view instance programmatically, we don't need to tag it as an outlet:
    var cameraView: CameraView!
  • Override the loadView() method to initialize the view:
    override func loadView() {
        cameraView = CameraView()
    
        view = cameraView
    }

Although the camera view will now be visible when we run the app, it won't yet show anything but a black rectangle. We'll fix this in the next section.

Configure the Capture Session

In order to get the camera view to actually reflect what the camera is seeing, we need to connect it to an AV capture session. We'll use a dispatch queue to execute the more expensive session operations so the UI isn't blocked while waiting for them to complete.

  • Add member variables for the capture session and dispatch queue to ViewController:
    let session = AVCaptureSession()
    let sessionQueue = DispatchQueue(label: AVCaptureSession.self.description(), attributes: [], target: nil)
  • Add the AVCaptureMetadataOutputObjectsDelegate protocol to the view controller class:
    class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate { 
        ...
    } 
  • Add the following code to viewDidLoad() to initialize the capture session. For this example, we'll be configuring the session to recognize two barcode types – EAN-13 (a superset of the familiar UPC-A format) codes and QR codes:
    session.beginConfiguration()
    
    // defaultDevice(withMediaType:) returns nil if no camera is available
    // (for example, in the simulator)
    if let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) {
        if let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
            session.canAddInput(videoDeviceInput) {
            session.addInput(videoDeviceInput)
        }
    
        let metadataOutput = AVCaptureMetadataOutput()
    
        if (session.canAddOutput(metadataOutput)) {
            session.addOutput(metadataOutput)
    
            metadataOutput.metadataObjectTypes = [
                AVMetadataObjectTypeEAN13Code,
                AVMetadataObjectTypeQRCode
            ]
    
            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        }
    }
    
    session.commitConfiguration()
    
    cameraView.previewLayer.session = session
    cameraView.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
  • Add the following additional code to viewDidLoad() to set the initial camera orientation:
    let videoOrientation: AVCaptureVideoOrientation
    switch UIApplication.shared.statusBarOrientation {
        case .portrait:
            videoOrientation = .portrait
    
        case .portraitUpsideDown:
            videoOrientation = .portraitUpsideDown
    
        case .landscapeLeft:
            videoOrientation = .landscapeLeft
    
        case .landscapeRight:
            videoOrientation = .landscapeRight
    
        default:
            videoOrientation = .portrait
    }
    
    cameraView.previewLayer.connection.videoOrientation = videoOrientation

Add Camera Usage Description to Info.plist

Use of the camera in an iOS application requires the user's permission. In order for iOS to ask for permission, we need to provide a string explaining what the application plans to do with the camera. Note that, as of iOS 10, the system will terminate an application that attempts to access the camera without providing this description.

  • Add the camera usage description to Info.plist:
        <key>NSCameraUsageDescription</key>
        <string>to scan barcodes</string>

The application still doesn't do much, but it will now at least prompt the user for permission to access the camera.
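iOS presents this prompt automatically the first time the capture session starts. To handle a denial more gracefully, access can also be requested explicitly before the session is configured. The following is a minimal sketch using the iOS 10 AVFoundation API; the denial handling is illustrative:

// Sketch: request camera access explicitly. The completion handler runs on
// an arbitrary queue, so hop back to the main queue before updating the UI.
AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
    DispatchQueue.main.async {
        if (!granted) {
            // Illustrative: let the user know scanning requires the camera
            print("Camera access was denied")
        }
    }
}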

Start and Stop the Capture Session

In order for the application to actually display what the camera is seeing, we need to start the capture session. We'll do this when the view appears. We'll also stop the session when the view disappears.

  • Add the following methods to ViewController to start and stop session capture:
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    
        sessionQueue.async {
            self.session.startRunning()
        }
    }
    
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
    
        sessionQueue.async {
            self.session.stopRunning()
        }
    }

While it isn't capable of scanning barcodes yet, the application will now at least correctly show the camera preview.

Handle Orientation Changes

Although it now displays the preview, the application doesn't yet respond to changes in orientation. Next, we'll add code to update the camera orientation when the device is rotated.

  • Add the following method to ViewController to update the preview orientation when the device orientation changes:
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
    
        // Update camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIDevice.current.orientation {
            case .portrait:
                videoOrientation = .portrait
    
            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown
    
            case .landscapeLeft:
                videoOrientation = .landscapeRight
    
            case .landscapeRight:
                videoOrientation = .landscapeLeft
    
            default:
                videoOrientation = .portrait
        }
    
        cameraView.layer.connection.videoOrientation = videoOrientation
    }

Now, when the device is rotated, the preview will reflect the correct orientation. Note that the landscape cases are deliberately swapped: UIDeviceOrientation describes how the hardware is held, while AVCaptureVideoOrientation describes how the video is rotated, and the two are mirror images of each other in landscape.

Capture Barcode Values

Finally, we're ready to add the code that actually captures barcode values. We'll do this using the captureOutput(_:didOutputMetadataObjects:from:) method of the AVCaptureMetadataOutputObjectsDelegate protocol.

  • Add the following method to ViewController:
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
        if let scan = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
            let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)
    
            alertController.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
    
            present(alertController, animated: true, completion: nil)
        }
    }

When a barcode is recognized, the application will now extract the associated value and present it to the user in an alert view.
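Note that the capture session keeps running while the alert is on screen, so the delegate method may fire again and attempt to present a second alert before the first has been dismissed. One possible refinement, sketched below using the session and sessionQueue members declared earlier, is to pause the session when a code is recognized and resume it when the alert is dismissed:

// Sketch: pause the session while the alert is visible, then resume it
// when the user taps "OK"
sessionQueue.async {
    self.session.stopRunning()
}

let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)

alertController.addAction(UIAlertAction(title: "OK", style: .default) { _ in
    self.sessionQueue.async {
        self.session.startRunning()
    }
})

present(alertController, animated: true, completion: nil)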

Summary

This tutorial demonstrated how to easily incorporate barcode scanning functionality into an iOS application using the AVFoundation framework. The complete source code for the example ViewController class should look something like the following:

import UIKit
import AVFoundation

class CameraView: UIView {
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    var previewLayer: AVCaptureVideoPreviewLayer {
        return layer as! AVCaptureVideoPreviewLayer
    }
}

class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    // Camera view
    var cameraView: CameraView!

    // AV capture session and dispatch queue
    let session = AVCaptureSession()
    let sessionQueue = DispatchQueue(label: AVCaptureSession.self.description(), attributes: [], target: nil)

    override func loadView() {
        cameraView = CameraView()

        view = cameraView
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        session.beginConfiguration()

        if let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) {
            if let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
                session.canAddInput(videoDeviceInput) {
                session.addInput(videoDeviceInput)
            }

            let metadataOutput = AVCaptureMetadataOutput()

            if (session.canAddOutput(metadataOutput)) {
                session.addOutput(metadataOutput)

                metadataOutput.metadataObjectTypes = [
                    AVMetadataObjectTypeEAN13Code,
                    AVMetadataObjectTypeQRCode
                ]

                metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            }
        }

        session.commitConfiguration()

        cameraView.previewLayer.session = session
        cameraView.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

        // Set initial camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIApplication.shared.statusBarOrientation {
            case .portrait:
                videoOrientation = .portrait

            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown

            case .landscapeLeft:
                videoOrientation = .landscapeLeft

            case .landscapeRight:
                videoOrientation = .landscapeRight

            default:
                videoOrientation = .portrait
        }

        cameraView.previewLayer.connection.videoOrientation = videoOrientation
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Start AV capture session
        sessionQueue.async {
            self.session.startRunning()
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Stop AV capture session
        sessionQueue.async {
            self.session.stopRunning()
        }
    }

    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)

        // Update camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIDevice.current.orientation {
            case .portrait:
                videoOrientation = .portrait

            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown

            case .landscapeLeft:
                videoOrientation = .landscapeRight

            case .landscapeRight:
                videoOrientation = .landscapeLeft

            default:
                videoOrientation = .portrait
        }

        cameraView.previewLayer.connection.videoOrientation = videoOrientation
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
        // Display barcode value
        if let scan = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
            let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)

            alertController.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))

            present(alertController, animated: true, completion: nil)
        }
    }
}

Printing Continuous Content in iOS

I’ve recently been working on an application that needs to generate printed receipts, and I’ve been using AirPrint to handle the output. Overall, I’ve found that AirPrint works really well, and I’ve had little trouble incorporating it into my app.

However, one challenge I’ve run into is producing continuous content. AirPrint seems to be geared more towards paginated content, and it doesn’t appear to work particularly well with the roll-based print media typically found in receipt printers.

After struggling with this for the better part of a day, I finally came up with this solution:

class ContinuousPageRenderer: UIPrintPageRenderer, UIPrintInteractionControllerDelegate {
    let attributedText: NSAttributedString

    let margin: CGFloat = 72.0 * 0.125

    init(attributedText: NSAttributedString) {
        self.attributedText = attributedText

        super.init()

        let printFormatter = UISimpleTextPrintFormatter(attributedText: attributedText)

        printFormatter.perPageContentInsets = UIEdgeInsets(top: margin, left: margin, bottom: margin, right: margin)

        addPrintFormatter(printFormatter, startingAtPageAt: 0)
    }

    func printInteractionController(_ printInteractionController: UIPrintInteractionController, cutLengthFor paper: UIPrintPaper) -> CGFloat {
        let size = CGSize(width: paper.printableRect.width - margin * 2, height: 0)

        let boundingRect = attributedText.boundingRect(with: size, options: [
            .usesLineFragmentOrigin,
            .usesFontLeading
        ], context: nil)

        return boundingRect.height + margin * 2
    }
}

This class provides a renderer for producing continuous output based on the content of an attributed string. Internally, it uses an instance of UISimpleTextPrintFormatter to format the output. A 1/8″ margin (at 72 points per inch, 72.0 × 0.125 = 9 points), represented by the margin constant, is established around the generated content.

The class conforms to the UIPrintInteractionControllerDelegate protocol and provides an implementation for the printInteractionController(_:cutLengthFor:) method, which, despite its somewhat misleading name, actually appears to control the length of the generated page. In this case, the cut length is determined by calculating the bounding rectangle of the attributed text using the current printable area minus the page margins.

In order to correctly calculate the page size, an instance of this class must be set both as the print page renderer and the delegate of the print interaction controller; for example:

// Generate attributed text
let attributedText = NSMutableAttributedString()

let text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\n"
let attributes = [NSFontAttributeName: UIFont.systemFont(ofSize: 10)]

attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))

// Print attributed text
let printInteractionController = UIPrintInteractionController.shared

let continuousPageRenderer = ContinuousPageRenderer(attributedText: attributedText)

printInteractionController.printPageRenderer = continuousPageRenderer
printInteractionController.delegate = continuousPageRenderer

Without the printInteractionController(_:cutLengthFor:) method, AirPrint attempts to break the content up into pages that, on my system, appear to have the same aspect ratio as “US Letter” (8 1/2 x 11) stock.

However, with the delegate method, the page size is correctly determined based on the length of the content.
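
For completeness, here is a minimal sketch of how the configured controller might then be presented; the output type and job name are illustrative rather than prescribed:

// Sketch: configure print info and present the print interaction controller
// (the .general output type and "Receipt" job name are illustrative)
let printInfo = UIPrintInfo(dictionary: nil)

printInfo.outputType = .general
printInfo.jobName = "Receipt"

printInteractionController.printInfo = printInfo

printInteractionController.present(animated: true, completionHandler: nil)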