Named Parameters in JDBC Queries

Prepared statements are a common way to execute parameterized queries in JDBC. For example, the following SQL might be used to retrieve a list of all users whose first or last name matches a particular character sequence:

SELECT * FROM user WHERE first_name LIKE ? or last_name LIKE ?

Parameter values are supplied at runtime via indexed setter methods defined by the PreparedStatement class:

statement.setString(1, pattern);
statement.setString(2, pattern);

This works fine for simple queries, but it becomes increasingly difficult to manage as the number of parameters grows. It is also redundant – although this query only requires a single argument, two parameter values must be supplied.

The Java Persistence API (JPA) provides a more convenient alternative using named parameters. For example, the above query might be written as follows in JPQL:

SELECT u FROM User u WHERE u.firstName LIKE :pattern or u.lastName LIKE :pattern

This is more readable and less verbose, as the caller only needs to provide the value of the "pattern" parameter once. It is also more resilient to changes, as the arguments are not dependent on ordinal position. Unfortunately, it requires a JPA-compliant object-relational mapping (ORM) framework such as Hibernate, a dependency that may not be satisfiable in all situations.
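
For reference, executing the JPQL version with a JPA EntityManager might look like the following sketch (assuming a mapped User entity and an EntityManager instance named entityManager):

List<User> results = entityManager.createQuery(
    "SELECT u FROM User u WHERE u.firstName LIKE :pattern OR u.lastName LIKE :pattern", User.class)
    .setParameter("pattern", pattern) // the value is bound once, by name
    .getResultList();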

The org.jtemplate.sql.Parameters class provided by the JTemplate framework brings named parameter support to JDBC. The parse() method of this class is used to create a Parameters instance from a JPA-like SQL query; for example:

SELECT * FROM user WHERE first_name LIKE :pattern or last_name LIKE :pattern

It takes a string or reader containing the query text as an argument:

Parameters parameters = Parameters.parse(sqlReader);

The getSQL() method of the Parameters class returns the processed query in standard JDBC syntax. This value can be used in a call to Connection#prepareStatement():

PreparedStatement statement = connection.prepareStatement(parameters.getSQL());

Parameter values are specified via the put() method:

parameters.put("pattern", pattern);

The values are applied to the statement via the apply() method:

parameters.apply(statement);

Once applied, the query can be executed:

ResultSet resultSet = statement.executeQuery();

Note that Parameters is not limited to queries; it can also be used for updates.
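
For instance, an update might look like the following sketch (the table and column names are purely illustrative):

Parameters parameters = Parameters.parse("UPDATE user SET last_name = :lastName WHERE id = :id");

parameters.put("lastName", "Smith");
parameters.put("id", 101);

PreparedStatement statement = connection.prepareStatement(parameters.getSQL());

parameters.apply(statement);

int updateCount = statement.executeUpdate();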

A complete example using the Parameters class can be found here. It is a simple REST service that allows a caller to search a database of pets by owner name. The source code for the class itself is available here.

For more ways to simplify Java development, please see my projects on GitHub:

  • HTTP-RPC – Lightweight multi-platform REST
  • JTemplate – Data-driven presentation templates for Java

JTemplate: Data-Driven Presentation Templates for Java

Templates are documents that describe an output format such as HTML, XML, or CSV. They allow the ultimate representation of a data structure to be specified independently of the data itself, promoting a clear separation of responsibility.

JTemplate uses the marker syntax defined by the CTemplate system: "markers" embedded in the template are replaced with values supplied by a data structure (which CTemplate calls a "data dictionary") when the template is processed. In JTemplate, the data dictionary is provided by an instance of java.util.Map whose entries represent the dictionary's key/value pairs.

For example, the contents of the following map might represent the result of some simple statistical calculations:

{
    "count": 3, 
    "sum": 9.0,
    "average": 3.0
}

A template for transforming this data into HTML is shown below:

<html>
<head>
    <title>Statistics</title>
</head>
<body>
    <p>Count: {{count}}</p>
    <p>Sum: {{sum}}</p>
    <p>Average: {{average}}</p> 
</body>
</html>

At execution time, the "count", "sum", and "average" markers are replaced by their corresponding values from the data dictionary, producing the following markup:

<html>
<head>
    <title>Statistics</title>
</head>
<body>
    <p>Count: 3</p>
    <p>Sum: 9.0</p>
    <p>Average: 3.0</p> 
</body>
</html>

TemplateEncoder Class

JTemplate provides the TemplateEncoder class for merging a template document with a data dictionary. Templates are applied using one of the following TemplateEncoder methods:

public void writeValue(Object value, OutputStream outputStream) { ... }
public void writeValue(Object value, OutputStream outputStream, Locale locale) { ... }
public void writeValue(Object value, Writer writer) { ... }
public void writeValue(Object value, Writer writer, Locale locale) { ... }

The first argument represents the value to write (i.e. the data dictionary), and the second the output destination. The optional third argument represents the locale for which the template will be applied. If unspecified, the default locale is used.

For example, the following code snippet applies a template named map.txt to the contents of a data dictionary whose values are specified by a hash map:

HashMap<String, Object> map = new HashMap<>();

map.put("a", "hello");
map.put("b", 123");
map.put("c", true);

TemplateEncoder encoder = new TemplateEncoder(getClass().getResource("map.txt"), "text/plain");

String result;
try (StringWriter writer = new StringWriter()) {
    encoder.writeValue(map, writer);

    result = writer.toString();
}

System.out.println(result);

If map.txt is defined as follows:

a = {{a}}, b = {{b}}, c = {{c}}

this code would produce the following output:

a = hello, b = 123, c = true
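
A value can also be written for a specific locale using one of the Locale variants shown earlier. For example, the following sketch applies the same template for the French locale; whether the output differs depends on any locale-sensitive formatting the template performs:

try (StringWriter writer = new StringWriter()) {
    encoder.writeValue(map, writer, Locale.FRANCE);

    result = writer.toString();
}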

Additional Information

This article introduced the JTemplate framework and provided a brief overview of its key features.

The latest JTemplate release can be downloaded here. For more information, see the project README.

Creating a Simple Android REST Client Using HTTP-RPC

HTTP-RPC is an open-source framework for simplifying development of REST applications. It allows developers to access REST-based web services using a convenient, RPC-like metaphor while preserving fundamental REST principles such as statelessness and uniform resource access.

The project currently includes support for consuming web services in Objective-C/Swift, Java, and JavaScript. It provides a consistent client API that makes it easy to interact with services regardless of target device or operating system.

This article demonstrates how the HTTP-RPC Java client library can be used in an Android application to invoke services provided by JSONPlaceholder, a "fake online REST API".

Service API

JSONPlaceholder offers a collection of web services that simulate common REST operations on a variety of resource types such as "albums", "photos", and "users". For example, the following URL retrieves a JSON document containing a list of simulated user records:

https://jsonplaceholder.typicode.com/users

The document is similar to the following:

[
  {
    "id": 1,
    "name": "Leanne Graham",
    "username": "Bret",
    "email": "Sincere@april.biz",
    "address": {
      "street": "Kulas Light",
      "suite": "Apt. 556",
      "city": "Gwenborough",
      "zipcode": "92998-3874",
      "geo": {
        "lat": "-37.3159",
        "lng": "81.1496"
      }
    },
    "phone": "1-770-736-8031 x56442",
    "website": "hildegard.org",
    "company": {
      "name": "Romaguera-Crona",
      "catchPhrase": "Multi-layered client-server neural-net",
      "bs": "harness real-time e-markets"
    }
  },
  ...
]

Additionally, the service provides a collection of simulated discussion posts, which can be retrieved on a per-user basis as follows:

https://jsonplaceholder.typicode.com/posts?userId=1

For example:

[
  {
    "userId": 1,
    "id": 1,
    "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
    "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
  },
  {
    "userId": 1,
    "id": 2,
    "title": "qui est esse",
    "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla"
  },
  ...
]

Sample Application

The sample application presents two views. The first one displays a list of users:

The second displays a list of posts by the selected user:

WebServiceProxy Class

The WebServiceProxy class is used to invoke service operations. Internally, it uses an instance of HttpURLConnection to send and receive data. Response data is deserialized automatically into appropriate Java types including String, Number, Boolean, List, and Map.

Service operations are initiated by calling the invoke() method, which takes the following arguments:

  • method – the HTTP method to execute
  • path – the resource path
  • arguments – a map containing the request arguments as key/value pairs
  • resultHandler – an instance of ResultHandler that will be invoked upon completion of the service operation

A convenience method is also provided for executing operations that don't take any arguments.

Arguments are passed to the service either via the query string or in the request body, like an HTML form. List arguments represent multi-value parameters and are handled similarly to <select multiple> tags in HTML.

The result handler is a callback that is invoked upon completion of the request. If successful, the first argument will contain the value returned by the server. Otherwise, the first argument will be null, and the second will contain an exception representing the error that occurred.

For example, the following code might be used to invoke an operation that returns the sum of two numbers, specified by the "a" and "b" arguments. The static mapOf() and entry() methods are provided by the WebServiceProxy class to simplify argument map creation:

// Get sum of "a" and "b"
serviceProxy.invoke("GET", "/math/sum", mapOf(entry("a", 2), entry("b", 4)), (result, exception) -> {
    // result is 6
});
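
Similarly, a List value can be passed to represent a multi-value argument, and the no-argument convenience method can be used when a request takes no arguments at all. The following sketch illustrates both forms; the "/items" path and "id" parameter are hypothetical:

// "id" is sent as a multi-value parameter, like an HTML <select multiple>
serviceProxy.invoke("GET", "/items", mapOf(entry("id", Arrays.asList(1, 2, 3))), (result, exception) -> {
    // handle result
});

// No arguments; the argument map is omitted entirely
serviceProxy.invoke("GET", "/items", (result, exception) -> {
    // handle result
});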

ExampleApplication Class

An instance of WebServiceProxy is created by the example application at startup:

private static WebServiceProxy serviceProxy;

public static WebServiceProxy getServiceProxy() {
    return serviceProxy;
}

@Override
public void onCreate() {
    super.onCreate();

    URL serverURL;
    try {
        serverURL = new URL("https://jsonplaceholder.typicode.com");
    } catch (MalformedURLException exception) {
        throw new RuntimeException(exception);
    }

    serviceProxy = new WebServiceProxy(serverURL, Executors.newSingleThreadExecutor()) {
        private Handler handler = new Handler(Looper.getMainLooper());

        @Override
        protected void dispatchResult(Runnable command) {
            handler.post(command);
        }
    };
}

The user and post activities discussed below use this proxy to invoke their respective service operations.

UserActivity Class

The UserActivity class is responsible for presenting the list of users returned from /users. Internally, it maintains a collection of objects representing the user list. In onResume(), if the list has not already been loaded, it uses the service proxy to retrieve the user list from the server:

public class UserActivity extends AppCompatActivity {
    private ListView userListView;

    private List<Map<String, ?>> userList = null;

    private BaseAdapter userListAdapter = new BaseAdapter() {
        ...
    };

    private static String TAG = UserActivity.class.getName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_main);

        userListView = (ListView)findViewById(R.id.user_list_view);

        userListView.setOnItemClickListener((parent, view, position, id) -> {
            Intent intent = new Intent(UserActivity.this, PostActivity.class);

            intent.putExtra(PostActivity.USER_ID_KEY, id);

            startActivity(intent);
        });
    }

    @Override
    protected void onResume() {
        super.onResume();

        if (userList == null) {
            ExampleApplication.getServiceProxy().invoke("GET", "/users",
                (List<Map<String, ?>> result, Exception exception) -> {
                if (exception == null) {
                    userList = result;

                    userListView.setAdapter(userListAdapter);
                } else {
                    Log.e(TAG, exception.getMessage());
                }
            });
        }
    }
}

A list adapter is used to present the user details for each row:

private BaseAdapter userListAdapter = new BaseAdapter() {
    @Override
    public int getCount() {
        return userList.size();
    }

    @Override
    public Object getItem(int position) {
        return userList.get(position);
    }

    @Override
    public long getItemId(int position) {
        return ((Number)userList.get(position).get("id")).longValue();
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            convertView = getLayoutInflater().inflate(R.layout.item_user, null);
        }

        Map<String, ?> user = userList.get(position);

        String name = (String)user.get("name");

        TextView nameTextView = (TextView)convertView.findViewById(R.id.name_text_view);
        nameTextView.setText(name);

        String email = (String)user.get("email");

        TextView emailTextView = (TextView)convertView.findViewById(R.id.email_text_view);
        emailTextView.setText(email);

        return convertView;
    }
};

PostActivity Class

When a user is selected, an instance of PostActivity is presented. This class is responsible for presenting the list of user posts returned from /posts. Like UserActivity, it maintains a collection of objects representing the server response, and populates the list in onResume():

public class PostActivity extends AppCompatActivity {
    private ListView postListView;

    private List<Map<String, ?>> postList = null;

    private BaseAdapter postListAdapter = new BaseAdapter() {
        ...
    };

    public static final String USER_ID_KEY = "userID";

    private static String TAG = PostActivity.class.getName();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.activity_post);

        postListView = (ListView)findViewById(R.id.post_list_view);
    }

    @Override
    protected void onResume() {
        super.onResume();

        if (postList == null) {
            long userID = getIntent().getLongExtra(USER_ID_KEY, 0);

            ExampleApplication.getServiceProxy().invoke("GET", "/posts", mapOf(entry("userId", userID)),
                (List<Map<String, ?>> result, Exception exception) -> {
                if (exception == null) {
                    postList = result;

                    postListView.setAdapter(postListAdapter);
                } else {
                    Log.e(TAG, exception.getMessage());
                }
            });
        }
    }
}

Again, a list adapter is used to present the details for each row:

private BaseAdapter postListAdapter = new BaseAdapter() {
    @Override
    public int getCount() {
        return postList.size();
    }

    @Override
    public Object getItem(int position) {
        return postList.get(position);
    }

    @Override
    public long getItemId(int position) {
        return ((Number)postList.get(position).get("id")).longValue();
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            convertView = getLayoutInflater().inflate(R.layout.item_post, null);
        }

        Map<String, ?> post = postList.get(position);

        String title = (String)post.get("title");

        TextView titleTextView = (TextView)convertView.findViewById(R.id.title_text_view);
        titleTextView.setText(title);

        String body = (String)post.get("body");

        TextView bodyTextView = (TextView)convertView.findViewById(R.id.body_text_view);
        bodyTextView.setText(body);

        return convertView;
    }
};

More Information

This article provided a demonstration of how the HTTP-RPC Java client library can be used to build a simple Android REST client application. The complete source code for the sample application can be found here.

The latest version of HTTP-RPC can be downloaded here. For more information, see the project README.

Simplifying Auto Layout in iOS

Auto layout is an iOS feature that allows developers to create applications that automatically adapt to device size, orientation, or content changes. An application built using auto layout generally has little or no hard-coded view positioning logic, but instead dynamically arranges user interface elements based on their preferred or "intrinsic" content sizes.

Auto layout in iOS is implemented primarily via layout constraints, which, while powerful, are not particularly convenient to work with. This article provides an overview of how constraints are typically managed in an iOS application, and then discusses some alternatives that can significantly simplify the task of working with auto layout.

Storyboards

The structure of an iOS user interface is commonly represented by XIB files or storyboards created using Xcode's Interface Builder utility. This tool allows developers to lay out an application's user interface visually using drag/drop and other interactive features.

For example, the following is a storyboard representing a simple view. The view contains two subviews whose positions will be automatically determined at runtime based on layout constraints. The constraints pin the subviews to the edges of the parent view as well as to each other, with a 16-pixel gap in between:

Running the application in portrait mode on an iPhone 7 produces the following results:

In landscape mode, the application looks like this:

While storyboards are undoubtedly the most common way to define auto layout constraints, they are not necessarily the most efficient. Using Interface Builder to visually establish every relationship can be awkward, especially when working with large or complex view hierarchies. Further, storyboards are not stored in a human-readable text format, which makes it difficult to identify changes across revisions. Finally, although controller logic can be shared between projects, iOS storyboards cannot be used in a tvOS application. Separate storyboards must be created for each platform, resulting in a potentially significant duplication of effort.

Programmatically Defined Constraints

In addition to storyboards, constraints can also be managed programmatically. For example, the following Swift code produces the same results as the previous example:

func createConstraintBasedView() {
    // Red view
    let redView = UIView()

    redView.translatesAutoresizingMaskIntoConstraints = false
    redView.backgroundColor = UIColor(red: 202.0 / 255.0, green: 53.0 / 255.0, blue: 56.0 / 255.0, alpha: 1.0)

    view.addSubview(redView)

    // Blue view
    let blueView = UIView()

    blueView.translatesAutoresizingMaskIntoConstraints = false
    blueView.backgroundColor = UIColor(red: 59.0 / 255.0, green: 85.0 / 255.0, blue: 162.0 / 255.0, alpha: 1.0)

    view.addSubview(blueView)

    // Constraints
    NSLayoutConstraint.activate([
        NSLayoutConstraint(item: redView, attribute: .top, relatedBy: .equal,
            toItem: topLayoutGuide, attribute: .bottom,
            multiplier: 1.0, constant: 0.0),
        NSLayoutConstraint(item: redView, attribute: .bottom, relatedBy: .equal,
            toItem: bottomLayoutGuide, attribute: .top,
            multiplier: 1.0, constant: 0.0),
        NSLayoutConstraint(item: redView, attribute: .leading, relatedBy: .equal,
            toItem: view, attribute: .leadingMargin,
            multiplier: 1.0, constant: 0.0),

        NSLayoutConstraint(item: blueView, attribute: .top, relatedBy: .equal,
            toItem: topLayoutGuide, attribute: .bottom,
            multiplier: 1.0, constant: 0.0),
        NSLayoutConstraint(item: blueView, attribute: .bottom, relatedBy: .equal,
            toItem: bottomLayoutGuide, attribute: .top,
            multiplier: 1.0, constant: 0.0),
        NSLayoutConstraint(item: blueView, attribute: .leading, relatedBy: .equal,
            toItem: redView, attribute: .trailing,
            multiplier: 1.0, constant: 16.0),
        NSLayoutConstraint(item: blueView, attribute: .trailing, relatedBy: .equal,
            toItem: view, attribute: .trailingMargin,
            multiplier: 1.0, constant: 0.0),

        NSLayoutConstraint(item: redView, attribute: .width, relatedBy: .equal,
            toItem: blueView, attribute: .width,
            multiplier: 1.0, constant: 0.0),
    ])
}

Because the constraints are established in code, it is easy to identify changes between revisions. This version also works in tvOS:

Unfortunately, managing constraints programmatically is not particularly convenient. This simple layout required the definition of eight individual constraints. More complex layouts could quickly become untenable.

Stack Views

The UIStackView class, introduced in iOS 9, provides an alternative to managing constraints directly. Stack views automatically arrange their subviews in a vertical or horizontal line, and can be nested to create sophisticated layouts.

For example, the following code uses a stack view to produce results identical to the first two examples. It also works in both iOS and tvOS:

func createStackView() -> UIView {
    let view = UIStackView()

    view.isLayoutMarginsRelativeArrangement = true
    view.spacing = 16

    // Red view
    let redView = UIView()

    redView.backgroundColor = UIColor(red: 202.0 / 255.0, green: 53.0 / 255.0, blue: 56.0 / 255.0, alpha: 1.0)

    view.addArrangedSubview(redView)

    // Blue view
    let blueView = UIView()

    blueView.backgroundColor = UIColor(red: 59.0 / 255.0, green: 85.0 / 255.0, blue: 162.0 / 255.0, alpha: 1.0)

    view.addArrangedSubview(blueView)

    // Width constraint
    NSLayoutConstraint.activate([
        NSLayoutConstraint(item: redView, attribute: .width, relatedBy: .equal,
            toItem: blueView, attribute: .width,
            multiplier: 1.0, constant: 0.0),
    ])

    return view
} 

While it is arguably more readable than the previous version, this example is still somewhat verbose. It also still requires the explicit creation of a layout constraint to manage the width relationship, which is not ideal.

Layout Views

MarkupKit is an open-source framework for simplifying development of native iOS and tvOS applications. Among other things, it provides the following collection of view classes, whose sole responsibility is managing the size and position of their respective subviews:

  • LMRowView – arranges subviews in a horizontal line
  • LMColumnView – arranges subviews in a vertical line
  • LMLayerView – arranges subviews in layers, like a stack of transparencies
  • LMAnchorView – optionally anchors subviews to one or more edges

These classes use layout constraints internally, allowing developers to easily take advantage of auto layout while eliminating the need to manage constraints directly.

For example, the following code uses an instance of LMRowView to replicate the results produced by the previous examples. The weight property is used to ensure that the views are the same width. This property, which is added to UIView by MarkupKit, specifies the amount of excess space the view would like to be given within its parent view, relative to all other weights. Since both views are assigned a weight of 1, they will each be given 1 / (1 + 1), or one-half, of the available space:

func createRowViewProgrammatically() -> UIView {
    let view = LMRowView()

    view.spacing = 16

    // Red view
    let redView = UIView()

    redView.backgroundColor = LMViewBuilder.colorValue("#CA3538")
    redView.weight = 1.0

    view.addArrangedSubview(redView)

    // Blue view
    let blueView = UIView()

    blueView.backgroundColor = LMViewBuilder.colorValue("#3B55A2")
    blueView.weight = 1.0

    view.addArrangedSubview(blueView)

    return view
}

Like stack views, layout views are easy to work with programmatically, and are supported in both iOS and tvOS. However, unlike stack views, no manual constraint manipulation is required; MarkupKit uses the defined weight values to automatically establish the width relationship.

Additionally, the colorValue(_) method of MarkupKit's LMViewBuilder class is used to simplify color assignment in this example. The logic is still a bit verbose though, and could become difficult to manage as view complexity increases.

Markup

MarkupKit's namesake feature is its support for declarative view construction. Using markup, the view hierarchy created in the previous example can be represented entirely as follows:

<LMRowView spacing="16">
    <UIView backgroundColor="#ca3538" weight="1"/>
    <UIView backgroundColor="#3b55a2" weight="1"/>
</LMRowView>

The markup is loaded using the view(withName:owner:root:) method of LMViewBuilder. This method is similar to the loadNibNamed(_:owner:options:) method of the NSBundle class, and returns the root element of the view hierarchy declared in the document:

func createRowViewDeclaratively() -> UIView {
    return LMViewBuilder.view(withName: "ViewController", owner: self, root: nil)!
}

Like the previous version, this example works in both iOS and tvOS. However, unlike all of the preceding examples, this version is extremely concise. It is also much more readable: the element hierarchy declared in the document parallels the resulting view hierarchy, making it easy to understand the relationships between views.

For example, the periodic table shown below was constructed using a combination of MarkupKit-provided layout views and UILabel instances:

Creating this view in Interface Builder would be an arduous task. Creating it programmatically would be even more difficult. However, in markup it is almost trivial. The complete source code for this example can be found here.

Using markup can also help promote a clear separation of responsibility within an application. Most, if not all, aspects of a view's presentation can be specified in the view declaration, leaving the view controller responsible solely for managing the view's behavior.

Summary

This article provided an overview of how layout constraints are typically managed in an iOS application, and discussed some alternatives that can significantly simplify the task of working with auto layout, including stack views, layout views, and markup.

For more information, please see the MarkupKit README.

Building a Simple Barcode Scanner in iOS

Although near-field communication (NFC) technologies such as Apple Pay are beginning to gain traction as a means of inter-device communication, visual communication mechanisms such as barcodes (both 1D and 2D) are still widely used across a broad range of industries.

This tutorial demonstrates how to easily incorporate barcode scanning functionality into an iOS application. The sample application uses the AVFoundation framework to capture and analyze barcode images from the device's camera. iOS 10, macOS 10.12, and Xcode 8 are required.

Create the Xcode Project

The first step is to create the Xcode project we'll be using to build the example app.

  • Open Xcode and select File | New | Project from the menu.
  • In the project template dialog, select iOS > Single View Application and click "Next".
  • Name the product "BarcodeScanner" and fill in the remaining fields as appropriate for your team and organization. Ensure that Swift is selected as the development language and click "Next".
  • Save the project to an appropriate location on your system.

Although it doesn't actually do anything yet, you should now be able to run the application by selecting your device in the toolbar and clicking the "Run" button or by pressing Command-R. Note that, since the application will use the camera, it needs to be run on an actual device and must be signed. Make sure that an appropriate development team is selected in the Signing section of the General tab for the "BarcodeScanner" target before attempting to run the app.

Add the CameraView Class

Before we can display the camera preview to the user, we need to create a class to represent the camera view.

  • Select ViewController.swift in the Project Navigator.
  • Add the following line to the imports section:
    import AVFoundation
  • Add the following class declaration immediately before the ViewController class that was automatically generated by Xcode:
    class CameraView: UIView {
        override class var layerClass: AnyClass {
            get {
                return AVCaptureVideoPreviewLayer.self
            }
        }
    
        // A property override cannot change the property's type, so a separate
        // computed property is used to cast the layer to the preview layer type
        var previewLayer: AVCaptureVideoPreviewLayer {
            get {
                return layer as! AVCaptureVideoPreviewLayer
            }
        }
    }

This class extends UIView and overrides the layerClass property to specify that the view will be backed by an instance of AVCaptureVideoPreviewLayer. It also defines a previewLayer property that casts the view's layer to AVCaptureVideoPreviewLayer. This will make it easier to access the properties of the preview layer later.

Add the Camera View to the View Controller

Next, we'll add the camera view to the view controller.

  • In the ViewController class, declare a member variable to contain the camera view. Since we'll be creating the view instance programmatically, we don't need to tag it as an outlet:
    var cameraView: CameraView!
  • Override the loadView() method to initialize the view:
    override func loadView() {
        cameraView = CameraView()
    
        view = cameraView
    }

Although the camera view will now be visible when we run the app, it won't yet show anything but a black rectangle. We'll fix this in the next section.

Configure the Capture Session

In order to get the camera view to actually reflect what the camera is seeing, we need to connect it to an AV capture session. We'll use a dispatch queue to execute the more expensive session operations so the UI isn't blocked while waiting for them to complete.

  • Add member variables for the capture session and dispatch queue to ViewController:
    let session = AVCaptureSession()
    let sessionQueue = DispatchQueue(label: AVCaptureSession.self.description(), attributes: [], target: nil)
  • Add the AVCaptureMetadataOutputObjectsDelegate protocol to the view controller class:
    class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate { 
        ...
    } 
  • Add the following code to viewDidLoad() to initialize the capture session. For this example, we'll be configuring the session to recognize two barcode types – EAN-13 (aka "UPC") codes and QR codes:
    session.beginConfiguration()
    
    let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    
    if (videoDevice != nil) {
        let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice)
    
        if (videoDeviceInput != nil) {
            if (session.canAddInput(videoDeviceInput!)) {
                session.addInput(videoDeviceInput!)
            }
        }
    
        let metadataOutput = AVCaptureMetadataOutput()
    
        if (session.canAddOutput(metadataOutput)) {
            session.addOutput(metadataOutput)
    
            metadataOutput.metadataObjectTypes = [
                AVMetadataObjectTypeEAN13Code,
                AVMetadataObjectTypeQRCode
            ]
    
            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        }
    }
    
    session.commitConfiguration()
    
    cameraView.previewLayer.session = session
    cameraView.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
  • Add the following additional code to viewDidLoad() to set the initial camera orientation:
    let videoOrientation: AVCaptureVideoOrientation
    switch UIApplication.shared.statusBarOrientation {
        case .portrait:
            videoOrientation = .portrait
    
        case .portraitUpsideDown:
            videoOrientation = .portraitUpsideDown
    
        case .landscapeLeft:
            videoOrientation = .landscapeLeft
    
        case .landscapeRight:
            videoOrientation = .landscapeRight
    
        default:
            videoOrientation = .portrait
    }
    
    cameraView.previewLayer.connection.videoOrientation = videoOrientation

Add Camera Usage Description to Info.plist

Use of the camera in an iOS application requires the user's permission. In order for iOS to ask for permission, we need to provide a string explaining what the application plans to do with the camera.

  • Add the camera usage description to Info.plist:
        <key>NSCameraUsageDescription</key>
        <string>to scan barcodes</string>

The application still doesn't do much, but it will now at least prompt the user for permission to access the camera:

Start and Stop the Capture Session

In order for the application to actually display what the camera is seeing, we need to start the capture session. We'll do this when the view appears. We'll also stop the session when the view disappears.

  • Add the following methods to ViewController to start and stop session capture:
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    
        sessionQueue.async {
            self.session.startRunning()
        }
    }
    
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
    
        sessionQueue.async {
            self.session.stopRunning()
        }
    }

While it isn't capable of scanning barcodes yet, the application will now at least correctly show the camera preview:

Handle Orientation Changes

Although it now displays the preview, the application doesn't yet respond to changes in orientation. Next, we'll add code to update the camera orientation when the device is rotated.

  • Add the following method to ViewController to update the preview orientation when the device orientation changes:
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
    
        // Update camera orientation
        let videoOrientation: AVCaptureVideoOrientation
        switch UIDevice.current.orientation {
            case .portrait:
                videoOrientation = .portrait
    
            case .portraitUpsideDown:
                videoOrientation = .portraitUpsideDown
    
            case .landscapeLeft:
                videoOrientation = .landscapeRight
    
            case .landscapeRight:
                videoOrientation = .landscapeLeft
    
            default:
                videoOrientation = .portrait
        }
    
        cameraView.previewLayer.connection.videoOrientation = videoOrientation
    }

Now, when the device is rotated, the preview will reflect the correct orientation.

Capture Barcode Values

Finally, we're ready to add the code that actually captures barcode values. We'll do this using the captureOutput(_:didOutputMetadataObjects:from:) method of the AVCaptureMetadataOutputObjectsDelegate protocol.

  • Add the following method to ViewController:
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [Any]!, from connection: AVCaptureConnection!) {
        if (metadataObjects.count > 0 && metadataObjects.first is AVMetadataMachineReadableCodeObject) {
            let scan = metadataObjects.first as! AVMetadataMachineReadableCodeObject
    
            let alertController = UIAlertController(title: "Barcode Scanned", message: scan.stringValue, preferredStyle: .alert)
    
            alertController.addAction(UIAlertAction(title: "OK", style: .default, handler:nil))
    
            present(alertController, animated: true, completion: nil)
        }
    }

When a barcode is recognized, the application will now extract the associated value and present it to the user in an alert view:

Summary

This tutorial demonstrated how to easily incorporate barcode scanning functionality into an iOS application using the AVFoundation framework. For more ways to simplify iOS app development, please see my projects on GitHub:

  • MarkupKit – Declarative UI for iOS and tvOS
  • HTTP-RPC – Lightweight multi-platform REST

Printing Continuous Content in iOS

I’ve recently been working on an application that needs to generate printed receipts, and I’ve been using AirPrint to handle the output. Overall, I’ve found that AirPrint works really well, and I’ve had little trouble incorporating it into my app.

However, one challenge I’ve run into is producing continuous content. AirPrint seems to be geared more towards paginated content, and it doesn’t appear to work particularly well with the roll-based print media typically found in receipt printers.

After struggling with this for the better part of a day, I finally came up with this solution:

class ContinuousPageRenderer : UIPrintPageRenderer, UIPrintInteractionControllerDelegate {
    let attributedText: NSAttributedString

    let margin: CGFloat = 72.0 * 0.125

    init(attributedText: NSAttributedString) {
        self.attributedText = attributedText

        super.init()

        let printFormatter = UISimpleTextPrintFormatter(attributedText: attributedText)

        printFormatter.perPageContentInsets = UIEdgeInsets(top: margin, left: margin, bottom: margin, right: margin)

        addPrintFormatter(printFormatter, startingAtPageAt: 0)
    }

    func printInteractionController(_ printInteractionController: UIPrintInteractionController, cutLengthFor paper: UIPrintPaper) -> CGFloat {
        let size = CGSize(width: paper.printableRect.width - margin * 2, height: 0)

        let boundingRect = attributedText.boundingRect(with: size, options: [
            .usesLineFragmentOrigin,
            .usesFontLeading
        ], context: nil)

        return boundingRect.height + margin * 2
    }
}

This class provides a renderer for producing continuous output based on the content of an attributed string. Internally, it uses an instance of UISimpleTextPrintFormatter to format the output. A 1/8″ border, represented by the margin constant, is established around the generated content.

The class conforms to the UIPrintInteractionControllerDelegate protocol and provides an implementation for the printInteractionController(_:cutLengthFor:) method, which, despite its somewhat misleading name, actually appears to control the length of the generated page. In this case, the cut length is determined by calculating the bounding rectangle of the attributed text using the current printable area minus the page margins.

In order to correctly calculate the page size, an instance of this class must be set both as the print page renderer and the delegate of the print interaction controller; for example:

// Generate attributed text
let attributedText = NSMutableAttributedString()

let text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\n"
let attributes = [NSFontAttributeName: UIFont.systemFont(ofSize: 10)]

attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))
attributedText.append(NSAttributedString(string: text, attributes: attributes))

// Print attributed text
let printInteractionController = UIPrintInteractionController.shared
let continuousPageRenderer = ContinuousPageRenderer(attributedText: attributedText)

printInteractionController.printPageRenderer = continuousPageRenderer
printInteractionController.delegate = continuousPageRenderer

Without the printInteractionController(_:cutLengthFor:) method, AirPrint attempts to break the content up into pages that, on my system, appear to have the same aspect ratio as “US Letter” (8 1/2 x 11) stock:

However, with the delegate method, the page size is correctly determined based on the length of the content: