Building a Simple Barcode Scanner in iOS

Although near-field communication (NFC) technologies such as Apple Pay are beginning to gain traction as a means of inter-device communication, visual communication mechanisms such as barcodes (both 1D and 2D) are still widely used across a broad range of industries.

This tutorial demonstrates how to easily incorporate barcode scanning functionality into an iOS application. The sample application will use the AVFoundation framework to capture and analyze barcode images using the device's camera. iOS 10, macOS 10.12, and Xcode 8 are required.

Create the Xcode Project

The first step is to create the Xcode project we'll be using to build the example app.

  • Open Xcode and select File | New | Project from the menu.
  • In the project template dialog, select iOS > Single View Application and click "Next".
  • Name the product "BarcodeScanner" and fill in the remaining fields as appropriate for your team and organization. Ensure that Swift is selected as the development language and click "Next".
  • Save the project to an appropriate location on your system.

Although it doesn't actually do anything yet, you should now be able to run the application by selecting your device in the toolbar and clicking the "Run" button or by pressing Command-R. Note that, since the application will use the camera, it needs to be run on an actual device and must be signed. Make sure that an appropriate development team is selected in the Signing section of the General tab for the "BarcodeScanner" target before attempting to run the app.

Add the CameraView Class

Before we can display the camera preview to the user, we need to create a class to represent the camera view.

  • Select ViewController.swift in the Project Navigator.
  • Add the following line to the imports section:
    import AVFoundation
  • Add the following class declaration immediately before the ViewController class that was automatically generated by Xcode:
    class CameraView: UIView {
        override class var layerClass: AnyClass {
            get {
                return AVCaptureVideoPreviewLayer.self
            }
        }
    
        override var layer: AVCaptureVideoPreviewLayer {
            get {
                return super.layer as! AVCaptureVideoPreviewLayer
            }
        }
    }

This class extends UIView and overrides the layerClass property to specify that the view will be backed by an instance of AVCaptureVideoPreviewLayer. It also overrides the layer property to cast the return value to AVCaptureVideoPreviewLayer. This will make it easier to access the properties of the preview layer later.

Add the Camera View to the View Controller

Next, we'll add the camera view to the view controller.

  • In the ViewController class, declare a member variable to contain the camera view. Since we'll be creating the view instance programmatically, we don't need to tag it as an outlet:
    var cameraView: CameraView!
  • Override the loadView() method to initialize the view:
    override func loadView() {
        cameraView = CameraView()
    
        view = cameraView
    }

Although the camera view will now be visible when we run the app, it won't yet show anything but a black rectangle. We'll fix this in the next section.

Configure the Capture Session

In order to get the camera view to actually reflect what the camera is seeing, we need to connect it to an AV capture session. We'll use a dispatch queue to execute the more expensive session operations so the UI isn't blocked while waiting for them to complete.

  • Add member variables for the capture session and dispatch queue to ViewController:
    let session = AVCaptureSession()
    let sessionQueue = DispatchQueue(label: AVCaptureSession.self.description(), attributes: [], target: nil)
  • Add the AVCaptureMetadataOutputObjectsDelegate protocol to the view controller class:
    class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate { 
        ...
    } 
  • Add the following code to viewDidLoad() to initialize the capture session. For this example, we'll be configuring the session to recognize two barcode types – EAN-13 (a superset of UPC-A) codes and QR codes:
    session.beginConfiguration()
    
    let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    
    if (videoDevice != nil) {
        let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice)
    
        if (videoDeviceInput != nil) {
            if (session.canAddInput(videoDeviceInput)) {
                session.addInput(videoDeviceInput)
            }
        }
    
        let metadataOutput = AVCaptureMetadataOutput()
    
        if (session.canAddOutput(metadataOutput)) {
            session.addOutput(metadataOutput)
    
            metadataOutput.metadataObjectTypes = [
                AVMetadataObjectTypeEAN13Code,
                AVMetadataObjectTypeQRCode
            ]
    
            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        }
    }
    
    session.commitConfiguration()
    
    cameraView.layer.session = session
    cameraView.layer.videoGravity = AVLayerVideoGravityResizeAspectFill
  • Add the following additional code to viewDidLoad() to set the initial camera orientation:
    let videoOrientation: AVCaptureVideoOrientation
    switch UIApplication.shared.statusBarOrientation {
        case .portrait:
            videoOrientation = .portrait
    
        case .portraitUpsideDown:
            videoOrientation = .portraitUpsideDown
    
        case .landscapeLeft:
            videoOrientation = .landscapeLeft
    
        case .landscapeRight:
            videoOrientation = .landscapeRight
    
        default:
            videoOrientation = .portrait
    }
    
    cameraView.layer.connection.videoOrientation = videoOrientation

Add Camera Usage Description to Info.plist

Use of the camera in an iOS application requires the user's permission. In order for iOS to ask for permission, we need to provide a string explaining what the application plans to do with the camera.

  • Add the camera usage description to Info.plist:
        <key>NSCameraUsageDescription</key>
        <string>to scan barcodes</string>

The application still doesn't do much, but it will now at least prompt the user for permission to access the camera.

Start and Stop the Capture Session

In order for the application to actually display what the camera is seeing, we need to start the capture session. We'll do this when the view appears. We'll also stop the session when the view disappears.

  • Add the following methods to ViewController to start and stop session capture:
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    
        sessionQueue.async {
            self.session.startRunning()
        }
    }
    
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
    
        sessionQueue.async {
            self.session.stopRunning()
        }
    }

While it isn't capable of scanning barcodes yet, the application will now at least correctly show the camera preview.

Handle Orientation Changes

Although it now displays the preview, the application doesn't yet respond to changes in orientation. Next, we'll add code to update the camera orientation when the device is rotated.

  • Add the following method to ViewController to update the preview orientation when the device orientation changes:
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
    
        // Update camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIDevice.current.orientation {
            case .portrait:
                videoOrientation = .portrait
    
            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown
    
            case .landscapeLeft:
                videoOrientation = .landscapeRight
    
            case .landscapeRight:
                videoOrientation = .landscapeLeft
    
            default:
                videoOrientation = .portrait
        }
    
        cameraView.layer.connection.videoOrientation = videoOrientation
    }

Now, when the device is rotated, the preview will reflect the correct orientation.

Capture Barcode Values

Finally, we're ready to add the code that actually captures barcode values. We'll do this using the captureOutput(_:didOutputMetadataObjects:from:) method of the AVCaptureMetadataOutputObjectsDelegate protocol.

  • Add the following method to ViewController:
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
        if (metadataObjects.count > 0 && metadataObjects.first is AVMetadataMachineReadableCodeObject) {
            let scan = metadataObjects.first as! AVMetadataMachineReadableCodeObject
    
            let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)
    
            alertController.addAction(UIAlertAction(title: "OK", style: .default, handler:nil))
    
            present(alertController, animated: true, completion: nil)
        }
    }

When a barcode is recognized, the application will now extract the associated value and present it to the user in an alert view.
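
Note that the capture session keeps running while the alert is displayed, so the delegate method may continue to be called as long as the same barcode remains in view. In practice, it may be desirable to pause scanning until the alert is dismissed. As a sketch, reusing the session and queue defined earlier, the body of the if statement above might be adjusted along these lines:

// Possible refinement (sketch): pause the session while the alert is
// displayed, and resume scanning when it is dismissed
sessionQueue.async {
    self.session.stopRunning()
}

let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)

alertController.addAction(UIAlertAction(title: "OK", style: .default) { _ in
    // Resume scanning once the alert is dismissed
    self.sessionQueue.async {
        self.session.startRunning()
    }
})

present(alertController, animated: true, completion: nil)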

Summary

This tutorial demonstrated how to easily incorporate barcode scanning functionality into an iOS application using the AVFoundation framework. The complete source code for the example, including the CameraView class, should look something like the following:

import UIKit
import AVFoundation

class CameraView: UIView {
    override class var layerClass: AnyClass {
        get {
            return AVCaptureVideoPreviewLayer.self
        }
    }

    override var layer: AVCaptureVideoPreviewLayer {
        get {
            return super.layer as! AVCaptureVideoPreviewLayer
        }
    }
}

class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    // Camera view
    var cameraView: CameraView!

    // AV capture session and dispatch queue
    let session = AVCaptureSession()
    let sessionQueue = DispatchQueue(label: AVCaptureSession.self.description(), attributes: [], target: nil)

    override func loadView() {
        cameraView = CameraView()

        view = cameraView
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        session.beginConfiguration()

        let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        if (videoDevice != nil) {
            let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice)

            if (videoDeviceInput != nil) {
                if (session.canAddInput(videoDeviceInput)) {
                    session.addInput(videoDeviceInput)
                }
            }

            let metadataOutput = AVCaptureMetadataOutput()

            if (session.canAddOutput(metadataOutput)) {
                session.addOutput(metadataOutput)

                metadataOutput.metadataObjectTypes = [
                    AVMetadataObjectTypeEAN13Code,
                    AVMetadataObjectTypeQRCode
                ]

                metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            }
        }

        session.commitConfiguration()

        cameraView.layer.session = session
        cameraView.layer.videoGravity = AVLayerVideoGravityResizeAspectFill

        // Set initial camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIApplication.shared.statusBarOrientation {
            case .portrait:
                videoOrientation = .portrait

            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown

            case .landscapeLeft:
                videoOrientation = .landscapeLeft

            case .landscapeRight:
                videoOrientation = .landscapeRight

            default:
                videoOrientation = .portrait
        }

        cameraView.layer.connection.videoOrientation = videoOrientation
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Start AV capture session
        sessionQueue.async {
            self.session.startRunning()
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Stop AV capture session
        sessionQueue.async {
            self.session.stopRunning()
        }
    }

    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)

        // Update camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIDevice.current.orientation {
            case .portrait:
                videoOrientation = .portrait

            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown

            case .landscapeLeft:
                videoOrientation = .landscapeRight

            case .landscapeRight:
                videoOrientation = .landscapeLeft

            default:
                videoOrientation = .portrait
        }

        cameraView.layer.connection.videoOrientation = videoOrientation
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
        // Display barcode value
        if (metadataObjects.count > 0 && metadataObjects.first is AVMetadataMachineReadableCodeObject) {
            let scan = metadataObjects.first as! AVMetadataMachineReadableCodeObject

            let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)

            alertController.addAction(UIAlertAction(title: "OK", style: .default, handler:nil))

            present(alertController, animated: true, completion: nil)
        }
    }
}

Printing Continuous Content in iOS

I’ve recently been working on an application that needs to generate printed receipts, and I’ve been using AirPrint to handle the output. Overall, I’ve found that AirPrint works really well, and I’ve had little trouble incorporating it into my app.

However, one challenge I’ve run into is producing continuous content. AirPrint seems to be geared more towards paginated content, and it doesn’t appear to work particularly well with the roll-based print media typically found in receipt printers.

After struggling with this for the better part of a day, I finally came up with this solution:

class ContinuousPageRenderer : UIPrintPageRenderer, UIPrintInteractionControllerDelegate {
    let attributedText: NSAttributedString

    let margin: CGFloat = 72.0 * 0.125

    init(attributedText: NSAttributedString) {
        self.attributedText = attributedText

        super.init()

        let printFormatter = UISimpleTextPrintFormatter(attributedText: attributedText)

        printFormatter.perPageContentInsets = UIEdgeInsets(top: margin, left: margin, bottom: margin, right: margin)

        addPrintFormatter(printFormatter, startingAtPageAt: 0)
    }

    func printInteractionController(_ printInteractionController: UIPrintInteractionController, cutLengthFor paper: UIPrintPaper) -> CGFloat {
        let size = CGSize(width: paper.printableRect.width - margin * 2, height: 0)

        let boundingRect = attributedText.boundingRect(with: size, options: [
            .usesLineFragmentOrigin,
            .usesFontLeading
        ], context: nil)

        return boundingRect.height + margin * 2
    }
}

This class provides a renderer for producing continuous output based on the content of an attributed string. Internally, it uses an instance of UISimpleTextPrintFormatter to format the output. A 1/8″ border, represented by the margin constant, is established around the generated content.

The class conforms to the UIPrintInteractionControllerDelegate protocol and provides an implementation for the printInteractionController(_:cutLengthFor:) method, which, despite its somewhat misleading name, actually appears to control the length of the generated page. In this case, the cut length is determined by calculating the bounding rectangle of the attributed text using the current printable area minus the page margins.

In order to correctly calculate the page size, an instance of this class must be set both as the print page renderer and the delegate of the print interaction controller; for example:

// Generate attributed text
let attributedText = NSMutableAttributedString()

let text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\n"
let attributes = [NSFontAttributeName: UIFont.systemFont(ofSize: 10)]

attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))

// Print attributed text
let continuousPageRenderer = ContinuousPageRenderer(attributedText: attributedText)

printInteractionController.printPageRenderer = continuousPageRenderer
printInteractionController.delegate = continuousPageRenderer
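
The printInteractionController variable in this example is assumed to be the shared UIPrintInteractionController instance. A minimal sketch of obtaining, configuring, and presenting it might look something like this (the job name and output type are illustrative):

// Obtain the shared print interaction controller (assumed context for the
// example above)
let printInteractionController = UIPrintInteractionController.shared

let printInfo = UIPrintInfo(dictionary: nil)
printInfo.outputType = .general
printInfo.jobName = "Receipt"

printInteractionController.printInfo = printInfo

// ...assign printPageRenderer and delegate as shown above, then present the print UI
printInteractionController.present(animated: true, completionHandler: nil)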

Without the printInteractionController(_:cutLengthFor:) method, AirPrint attempts to break the content up into pages that, on my system, appear to have the same aspect ratio as “US Letter” (8 1/2 x 11) stock.

However, with the delegate method, the page size is correctly determined based on the length of the content.

Anchor Views

Anchor views are a new feature in MarkupKit 2.6, which is now available for download. They are represented by instances of the LMAnchorView class, a layout view that optionally anchors subviews to one or more of its own edges. Although it is possible to achieve similar layouts using a combination of row, column, layer, and spacer views, anchor views can provide a simpler alternative in many cases.

Anchors are specified as a comma-separated list of edges to which the view will be anchored within its parent. For example, the following markup creates an anchor view containing four labels anchored to its top, left, right, and bottom edges. The labels will all be inset by 16 pixels (they are given a border to make their bounding rectangles visible):

<LMAnchorView layoutMargins="16">
    <UILabel text="Top" anchor="top"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
    <UILabel text="Left" anchor="left"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
    <UILabel text="Right" anchor="right"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
    <UILabel text="Bottom" anchor="bottom"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
</LMAnchorView>

Subviews may also be anchored to the leading and trailing edges of the parent view to support right-to-left locales; for example:

<LMAnchorView layoutMargins="16">
    <UILabel text="Leading" anchor="leading"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
    <UILabel text="Trailing" anchor="trailing"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
</LMAnchorView>

Additionally, subviews may be anchored to multiple edges for a given dimension. For example, the following markup creates an anchor view containing two labels, each of which will span the entire width of the anchor view:

<LMAnchorView layoutMargins="16">
    <UILabel text="Top" anchor="top, left, right" textAlignment="center"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
    <UILabel text="Bottom" anchor="bottom, left, right" textAlignment="center"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
</LMAnchorView>

If no anchor is specified for a given dimension, the subview will be centered within the anchor view for that dimension:

<LMAnchorView layoutMargins="16">
    <UILabel text="Center"
        layer.borderWidth="0.5"
        layer.borderColor="#ff0000"/>
</LMAnchorView>

For more information, see the project README.

JTemplate: Template-Driven REST Services for Java

JTemplate is an open-source implementation of the CTemplate templating system (aka "Mustache") for Java. It also provides a set of classes for implementing template-driven REST services in Java.

This article introduces the JTemplate framework and provides an overview of its key features.

Templates

Templates are documents that describe an output format such as HTML, XML, or CSV. They allow the ultimate representation of a data structure to be specified independently of the data itself, promoting a clear separation of responsibility.

The CTemplate system defines a set of "markers" that are replaced with values supplied by the data structure (which CTemplate calls a "data dictionary") when a template is processed. In JTemplate, the data dictionary is provided by an instance of java.util.Map whose entries represent the values supplied by the dictionary.

For example, the contents of the following map might represent the result of some simple statistical calculations:

{
    "count": 3, 
    "sum": 9.0,
    "average": 3.0
}

A template for transforming this data into HTML is shown below:

<html>
<head>
    <title>Statistics</title>
</head>
<body>
    <p>Count: {{count}}</p>
    <p>Sum: {{sum}}</p>
    <p>Average: {{average}}</p> 
</body>
</html>

At execution time, the "count", "sum", and "average" markers are replaced by their corresponding values from the data dictionary, producing the following markup:

<html>
<head>
    <title>Statistics</title>
</head>
<body>
    <p>Count: 3</p>
    <p>Sum: 9.0</p>
    <p>Average: 3.0</p> 
</body>
</html>

JTemplate provides the TemplateEncoder class for merging a template document with a data dictionary. Templates are applied using one of the following TemplateEncoder methods:

public void writeValue(Object value, OutputStream outputStream) { ... }
public void writeValue(Object value, OutputStream outputStream, Locale locale) { ... }
public void writeValue(Object value, Writer writer) { ... }
public void writeValue(Object value, Writer writer, Locale locale) { ... }

The first argument represents the value to write (i.e. the data dictionary), and the second the output destination. The optional third argument represents the locale for which the template will be applied. If unspecified, the default locale is used.

For example, the following code snippet applies a template named map.txt to the contents of a data dictionary whose values are specified by a hash map:

HashMap<String, Object> map = new HashMap<>();

map.put("a", "hello");
map.put("b", 123");
map.put("c", true);

TemplateEncoder encoder = new TemplateEncoder(getClass().getResource("map.txt"), "text/plain");

String result;
try (StringWriter writer = new StringWriter()) {
    encoder.writeValue(map, writer);

    result = writer.toString();
}

System.out.println(result);

If map.txt is defined as follows:

a = {{a}}, b = {{b}}, c = {{c}}

this code would produce the following output:

a = hello, b = 123, c = true

REST Services

In addition to template processing, JTemplate provides several classes for use in implementing template-driven REST services:

  • DispatcherServlet – abstract base class for REST services
  • RequestMethod – annotation that specifies the HTTP verb associated with a service method
  • ResponseMapping – annotation that associates a template with a method result
  • JSONEncoder – class used for encoding responses that are not associated with a template

DispatcherServlet is an abstract base class for REST services. Service operations are defined by adding public methods to a concrete service implementation.

Methods are invoked by submitting an HTTP request for a path associated with a servlet instance. Arguments are provided either via the query string or in the request body, like an HTML form. DispatcherServlet converts the request parameters to the expected argument types, invokes the method, and writes the return value to the response stream.

The RequestMethod annotation is used to associate a service method with an HTTP verb such as GET or POST. The optional ResponseMapping annotation associates a template document with a method result. If specified, TemplateEncoder is used to apply the template to the return value to produce the final response. Otherwise, the return value is automatically serialized as JSON using the JSONEncoder class.

For example, the following class might be used to implement a service that performs the simple statistical calculations discussed in the previous section:

@WebServlet(urlPatterns={"/statistics/*"}, loadOnStartup=1)
public class StatisticsServlet extends DispatcherServlet {
    private static final long serialVersionUID = 0;

    @RequestMethod("GET")
    @ResponseMapping(name="statistics~html.txt", mimeType="text/html")
    public Map<String, ?> getStatistics(List<Double> values) {
        int count = values.size();

        double sum = 0;

        for (int i = 0; i < count; i++) {
            sum += values.get(i);
        }

        double average = sum / count;

        return mapOf(
            entry("count", count),
            entry("sum", sum),
            entry("average", average)
        );
    }
}

A specific representation is requested by appending a tilde ("~") character to the service URL, followed by a file extension representing the desired document type. The MIME type associated with the extension is used to identify the template to apply.

For example, a GET for the following URL would return the default JSON response:

/statistics?values=1&values=3&values=5
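
Given the getStatistics() implementation above, the response produced by JSONEncoder for these arguments should look something like this:

{
    "count": 3, 
    "sum": 9.0,
    "average": 3.0
}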

However, a GET for this URL would return an HTML document produced by applying the template defined in statistics~html.txt to the result:

/statistics/~html?values=1&values=3&values=5

Additional Information

This article introduced the JTemplate framework and provided an overview of its key features.

The latest JTemplate release can be downloaded here. For more information, see the project README.

HTTP-RPC: A Lightweight Multi-Platform REST Client Framework

HTTP-RPC is an open-source framework for simplifying development of REST applications. It allows developers to access REST-based web services using a convenient, RPC-like metaphor while preserving fundamental REST principles such as statelessness and uniform resource access.

The project currently includes support for consuming web services in Objective-C/Swift, Java (including Android), and JavaScript. It provides a consistent, callback-based API that makes it easy to interact with services regardless of target device or operating system.

This article introduces the HTTP-RPC framework and provides an overview of some of its key features.

Service Operations

Services are accessed by applying an HTTP verb such as GET or POST to a target resource. The target is specified by a path representing the name of the resource, and is generally expressed as a noun such as /calendar or /contacts.

Arguments are provided either via the query string or in the request body, like an HTML form. Although services may produce any type of content, results are generally returned as JSON. Operations that do not return a value are also supported.

For example, the following request might retrieve the sum of two numbers, whose values are specified by the a and b query arguments:

GET /math/sum?a=2&b=4

Alternatively, the argument values could be specified as a list rather than as two fixed variables:

GET /math/sum?values=1&values=2&values=3

In either case, the service would return the value 6 in response.

Client Implementations

The project currently supports consuming services in Objective-C/Swift, Java, and JavaScript. The iOS client is distributed as a universal framework that is less than 500KB in size. The Java client is distributed as a JAR file that is only 17KB in size and has no external dependencies. The JavaScript client is distributed as a single JavaScript source file that is less than 3KB and also has no dependencies.

The following examples demonstrate how the various client libraries can be used to invoke the operations of the hypothetical math service discussed in the previous section. Each example creates an instance of a platform-specific service proxy, then executes the service requests by specifying the HTTP method, resource path, method arguments, and a result handler that will be invoked on completion of the method. Note that the static mapOf() and entry() methods used in the Java example are provided by the WebServiceProxy class to help simplify argument map creation:

Swift

// Create service proxy
let serviceProxy = WSWebServiceProxy(session: URLSession.shared, serverURL: URL(string: "https://localhost:8443")!)

// Get sum of "a" and "b"
serviceProxy.invoke("GET", path: "/math/sum", arguments: ["a": 2, "b": 4]) {(result, error) in
    // result is 6
}

// Get sum of all values
serviceProxy.invoke("GET", path: "/math/sum", arguments: ["values": [1, 2, 3, 4]]) {(result, error) in
    // result is 6
}

Java

// Create service proxy
WebServiceProxy serviceProxy = new WebServiceProxy(new URL("https://localhost:8443"), Executors.newFixedThreadPool(10));

// Get sum of "a" and "b"
serviceProxy.invoke("GET", "/math/sum", mapOf(entry("a", 2), entry("b", 4)), (result, exception) -> {
    // result is 6
});

// Get sum of all values
serviceProxy.invoke("GET", "/math/sum", mapOf(entry("values", listOf(1, 2, 3))), (result, exception) -> {
    // result is 6
});

JavaScript

// Create service proxy
var serviceProxy = new WebServiceProxy();

// Get sum of "a" and "b"
serviceProxy.invoke("GET", "/math/sum", {a:4, b:2}, function(result, error) {
    // result is 6
});

// Get sum of all values
serviceProxy.invoke("GET", "/math/sum", {values:[1, 2, 3, 4]}, function(result, error) {
    // result is 6
});

Although the examples are written in three different programming languages, they are all structurally similar and demonstrate identical behavior.

More Information

This article introduced the HTTP-RPC framework and provided an overview of some of its key features. The latest HTTP-RPC release can be downloaded here. For more information, see the project README.

MarkupKit: Declarative UI for iOS

MarkupKit is an open-source framework for simplifying development of native iOS applications. It allows developers to construct user interfaces declaratively using a human-readable markup language rather than visually using Interface Builder, similar to how applications are built for Android and .NET.

For example, the following markup creates an instance of UILabel and sets the value of its text property to "Hello, World!":

<UILabel text="Hello, World!"/>

The output produced by this markup is identical to the output of the following Swift code:

let label = UILabel()
label.text = "Hello, World!"

Building an interface in markup can significantly reduce development time. For example, the periodic table shown below was constructed using a combination of MarkupKit-provided layout views and UILabel instances:

Creating this view in Interface Builder would be an arduous task. Creating it programmatically would be even more difficult. However, in markup it is almost trivial. The complete source code for this example can be found here.

Using markup also helps to promote a clear separation of responsibility. Most, if not all, aspects of a view's presentation can be specified in the view declaration, leaving the controller responsible solely for managing the view's behavior.

This document introduces the MarkupKit framework and provides an overview of some of its key features, including property templates, outlets and actions, localization, and auto layout.

Document Structure

MarkupKit uses XML to define the structure of a user interface. The hierarchical nature of an XML document parallels the view hierarchy of an iOS application, making it easy to understand the relationships between views.

Elements

Elements in a MarkupKit document typically represent instances of UIView or its subclasses. As elements are read by the XML parser, the corresponding class instances are dynamically created and added to the view hierarchy.

For example, the following markup declares an instance of LMColumnView containing a UIImageView and a UILabel. LMColumnView is a MarkupKit-provided subclass of UIView that automatically arranges its subviews in a vertical line:

<LMColumnView>
    <UIImageView image="world.png" contentMode="center"/>
    <UILabel text="Hello, World!" textAlignment="center"/>
</LMColumnView>

Elements may not always represent view instances, however. For example, this markup creates an instance of UISegmentedControl, the content of which is defined by a collection of "segment" tags:

<UISegmentedControl>
    <segment title="Small"/>
    <segment title="Medium"/>
    <segment title="Large"/>
    <segment title="Extra-Large"/>
</UISegmentedControl>

Attributes

Attributes in a MarkupKit document typically represent view properties. For example, the following markup declares an instance of a system-style UIButton and sets its title property to "Press Me!":

<UIButton style="systemButton" title="Press Me!"/>

Property values are set using key-value coding (KVC). Type conversions for string, number, and boolean properties are handled automatically by KVC. Other types, such as colors, fonts, images, and enumerations, are handled specifically by MarkupKit.

For example, the following markup creates a label whose font is set to 24-point Helvetica and whose text color is set to "#ff0000", or bright red:

<UILabel text="A Red Label" font="Helvetica 24" textColor="#ff0000"/>

A few attributes have special meaning in MarkupKit and do not represent properties. These include "style", "class", and "id". Their respective purposes are explained in more detail later.

Additionally, attributes whose names begin with "on" represent control events, or "actions". The values of these attributes represent the handler methods that are triggered when their associated events are fired. For example, this markup creates a button with an associated action that will be triggered when the button is pressed:

<UIButton style="systemButton" title="Press Me!" onPrimaryActionTriggered="buttonPressed"/>

Actions are also discussed in more detail below.

Property Templates

Often, when constructing a user interface, the same set of property values is applied repeatedly to instances of a given type. For example, an application designer may want all buttons to have a similar appearance. While it is possible to simply duplicate the property definitions across each button instance, this is repetitive and does not allow the design to be easily modified later – every instance must be located and modified individually, which can be time-consuming and error-prone.

MarkupKit allows developers to abstract common sets of property definitions into CSS-like "property templates", which can then be applied by name to individual view instances. This makes it much easier to assign common property values as well as modify them later.

Property templates are specified using JavaScript Object Notation (JSON), and may be either external or inline. Inline templates are defined within the markup document itself, and external templates are specified in a separate file.

For example, the following JSON document defines a template named "greeting", which contains definitions for "font" and "textAlignment" properties:

{
  "greeting": {
    "font": "Helvetica 24", 
    "textAlignment": "center"
  }
}

Templates are added to a MarkupKit document using the properties processing instruction (PI). The following PI adds all properties defined by MyStyles.json to the current document:

<?properties MyStyles?>

Inline templates simply embed the entire template definition within the processing instruction:

<?properties {
  "greeting": {
    "font": "Helvetica 24", 
    "textAlignment": "center"
  }
}?>

Templates are applied to view instances using the reserved "class" attribute. The value of this attribute refers to the name of a template defined within the current document. All property values defined by the template are applied to the view. Nested properties, such as "titleLabel.font", are supported.

For example, given the preceding template definition, the following markup would produce a label reading "Hello, World!" in 24-point Helvetica with horizontally centered text:

<UILabel class="greeting" text="Hello, World!"/>

Multiple templates can be applied to a view using a comma-separated list of template names; for example:

<UILabel class="bold, red" text="Bold Red Label"/>

Outlets

The reserved "id" attribute can be used to assign a name to a view instance. This creates an "outlet" for the view that makes it accessible to calling code. Using KVC, MarkupKit "injects" the named view instance into the document's owner (generally either the view controller for the root view or the root view itself), allowing the application to interact with it.

For example, the following markup declares an instance of UITextField and assigns it an ID of "textField":

<UITextField id="textField"/>

The owning class might declare an outlet for the text field in Objective-C like this:

@property (nonatomic) IBOutlet UITextField *textField;

or in Swift, like this:

@IBOutlet var textField: UITextField!

In either case, when the document is loaded, the outlet will be populated with the text field instance, and the application can interact with it just as if it was defined in a storyboard or created programmatically.

Actions

Most non-trivial applications need to respond in some way to user interaction. UIKit controls (subclasses of the UIControl class) fire events that notify an application when such interaction has occurred. For example, the UIButton class fires the UIControlEventPrimaryActionTriggered event when a button instance is tapped.

While it would be possible for an application to register for events programmatically using outlets, MarkupKit provides a more convenient alternative. Any attribute whose name begins with "on" (but does not refer to a property) is considered a control event. The value of the attribute represents the name of the action that will be triggered when the event is fired.

For example, the following markup declares an instance of UIButton that calls the buttonPressed: method of the document's owner when the button is tapped:

<UIButton style="systemButton" title="Press Me!" onPrimaryActionTriggered="buttonPressed:"/>

The corresponding handler method might be defined in Swift as follows:

@IBAction func buttonPressed(_ sender: UIButton) {
    // Handle button press
}

Localization

If an attribute's value begins with "@", MarkupKit attempts to look up a localized version of the value before setting the property.

For example, if an application has defined a localized greeting in Localizable.strings as follows:

"hello" = "Hello, World!";

the following markup will produce an instance of UILabel with the value of its text property set to "Hello, World!":

<UILabel text="@hello"/>

If a localized value is not found, the key will be used instead. This allows developers to easily identify missing string resources at runtime.

MarkupKit Classes

MarkupKit includes a number of classes to help simplify application development. Some of the most common are discussed below.

LMViewBuilder

LMViewBuilder is the class that is actually responsible for loading a MarkupKit document. It provides the following class method, which, given a document name, owner, and optional root view, deserializes a view hierarchy from markup:

+ (UIView *)viewWithName:(NSString *)name owner:(nullable id)owner root:(nullable UIView *)root;

The name parameter represents the name of the view to load. It is the file name of the XML document containing the view declaration, minus the .xml extension.

The owner parameter represents the view's owner. It is often an instance of UIViewController, but this is not strictly required. For example, custom table and collection view cell classes often specify themselves as the owner.

The root parameter represents the value that will be used as the root view instance when the document is loaded. This value is often nil, meaning that the root view will be specified by the document itself. However, when non-nil, it means that the root view is being provided by the caller. In this case, the reserved <root> tag can be used as the document's root element to refer to this view.

For example, a view controller that is defined by a storyboard already has an established view instance when viewDidLoad is called. The controller can pass itself as the view's owner and the value of its view property as the root argument. This allows the navigational structure of the application (i.e. segues) to be defined in a storyboard, but the content of individual views to be defined in markup.
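
Assuming the Swift rendering of the method shown above, such a storyboard-based controller might load its content in viewDidLoad() like this (the document name is illustrative):

override func viewDidLoad() {
    super.viewDidLoad()

    // Populate the storyboard-provided view with content defined in
    // ViewController.xml; the document's root element would be the
    // reserved <root> tag
    _ = LMViewBuilder.view(withName: "ViewController", owner: self, root: view)
}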

Layout Views

Auto layout is an iOS feature that allows developers to create applications that automatically adapt to device size, orientation, or content changes. An application built using auto layout generally has little or no hard-coded view positioning logic, but instead dynamically arranges user interface elements based on their preferred or "intrinsic" content sizes.

Auto layout in iOS is implemented primarily via layout constraints, which, while powerful, are not particularly convenient to work with. To simplify the process, MarkupKit provides the following set of view classes, whose sole responsibility is managing the size and position of their respective subviews:

  • LMRowView – arranges subviews in a horizontal line
  • LMColumnView – arranges subviews in a vertical line
  • LMLayerView – arranges subviews in layers, like a stack of transparencies

These classes use layout constraints internally, allowing developers to easily take advantage of auto layout while eliminating the need to manage constraints directly.

LMRowView

The LMRowView class arranges its subviews in a horizontal line. Subviews are laid out from leading to trailing edge in the order in which they are declared. For example, the following markup creates a row view containing three labels:

<LMRowView layoutMargins="12">
    <UILabel text="One"/>
    <UILabel text="Two"/>
    <UILabel text="Three"/>
    <LMSpacer/>
</LMRowView>

The "layoutMargins" attribute establishes a 12-pixel wide gap around the row view's border, and the trailing spacer view ensures that the labels are left-aligned within the row (or right-aligned in locales that use right-to-left text):

Spacer views are discussed in more detail later.

Baseline Alignment

Subviews can be baseline-aligned within a row using the alignToBaseline property. For example, this markup creates a row view containing three labels, all with different font sizes:

<LMRowView alignToBaseline="true" layoutMargins="12">
    <UILabel text="Ten" font="Helvetica 12"/>
    <UILabel text="Twenty" font="Helvetica 24"/>
    <UILabel text="Thirty" font="Helvetica 48"/>
    <LMSpacer/>
</LMRowView>

Because alignToBaseline is set to true, the baselines of all three labels will line up.

LMColumnView

The LMColumnView class arranges its subviews in a vertical line. Subviews are laid out from top to bottom in the order in which they are declared. For example, the following markup creates a column view containing three text fields:

<LMColumnView layoutMargins="12">
    <UITextField placeholder="First" borderStyle="roundedRect"/>
    <UITextField placeholder="Second" borderStyle="roundedRect"/>
    <UITextField placeholder="Third" borderStyle="roundedRect"/>
    <LMSpacer/>
</LMColumnView>

The left and right edges of each subview are automatically pinned to the left and right edges of the column view, ensuring that all of the text fields are the same width.

Grid Alignment

Nested subviews of a column view can be vertically aligned in a spreadsheet-like grid using the alignToGrid property. When this property is set to true, cells in contiguous rows will be resized to match the width of the widest cell in the column.

For example, the following markup would produce a grid containing three rows and two columns:

<LMColumnView alignToGrid="true" layoutMargins="12">
    <LMRowView>
        <UILabel text="One"/>
        <UITextField weight="1" placeholder="First" borderStyle="roundedRect"/>
    </LMRowView>

    <LMRowView>
        <UILabel text="Two"/>
        <UITextField weight="1" placeholder="Second" borderStyle="roundedRect"/>
    </LMRowView>

    <LMRowView>
        <UILabel text="Three"/>
        <UITextField weight="1" placeholder="Third" borderStyle="roundedRect"/>
    </LMRowView>
</LMColumnView>

The weight values ensure that the text fields are allocated all of the remaining space within each row after the size of each label has been determined.

Weights are discussed in more detail below.

View Weights

Often, a row or column view will be given more space than it needs to accommodate the intrinsic sizes of its subviews. MarkupKit adds a weight property to UIView that is used to determine how the extra space should be allocated. Weight is a numeric value that specifies the amount of excess space the view would like to be given within its superview (once the sizes of all unweighted views have been determined) and is relative to all other weights specified within the superview.

For row views, weight applies to the excess horizontal space, and for column views to the excess vertical space. For example, both of the labels below have a weight of 0.5, so they will each be allocated 50% of the width of the row view. The labels are given a border to make their bounds more obvious:

<LMRowView layoutMargins="12">
    <UILabel weight="0.5" text="50%" textAlignment="center"
        layer.borderWidth="0.5" layer.borderColor="#ff6666"/>
    <UILabel weight="0.5" text="50%" textAlignment="center"
        layer.borderWidth="0.5" layer.borderColor="#ff6666"/>
</LMRowView>

In the following example, the first label will be given one-sixth of the available space, the second one-third (2/6), and the third one-half (3/6):

<LMColumnView layoutMargins="12">
    <UILabel weight="1" text="1/6" textAlignment="center"
        layer.borderWidth="0.5" layer.borderColor="#ff6666"/>
    <UILabel weight="2" text="1/3" textAlignment="center"
        layer.borderWidth="0.5" layer.borderColor="#ff6666"/>
    <UILabel weight="3" text="1/2" textAlignment="center"
        layer.borderWidth="0.5" layer.borderColor="#ff6666"/>
</LMColumnView>

Spacer Views

A common use for weights is to create flexible space around a view. For example, the following markup will center a label horizontally within a row:

<LMRowViewn layoutMargins="12">
    <UIView weight="1"/>
    <UILabel text="Hello, World!"/>
    <UIView weight="1"/>
</LMRowView>

Because such "spacer" views are so common, MarkupKit provides a dedicated UIView subclass called LMSpacer for conveniently creating flexible space between other views. LMSpacer has a default weight of 1, so the previous example could be rewritten as follows, eliminating the "weight" attribute and improving readability:

<LMRowView layoutMargins="12">
    <LMSpacer/>
    <UILabel text="Hello, World!"/>
    <LMSpacer/>
</LMRowView>

Layer Views

The LMLayerView class simply arranges its subviews in layers, like a stack of transparencies. The subviews are all automatically sized to fill the layer view.

For example, the following markup creates a layer view containing an image view and a label:

<LMLayerView>
    <UIImageView image="world.png" contentMode="center"/>
    <UILabel text="Hello, World!" textColor="#ffffff" textAlignment="center"/>
</LMLayerView>

Since it is declared first, the contents of the image view will appear beneath the label text.

More Information

This document introduced the MarkupKit framework and provided an overview of some of its key features.

The latest MarkupKit release can be downloaded here. It is also available via CocoaPods. For more information, see the project README.