Camera and Gallery Image Upload by Alamofire

Update note: This tutorial has been updated to Xcode 9.3, iOS 11.3, Swift 4.1 and Alamofire 4.7.0 by Ron Kliffer. The original tutorial was written by Aaron Douglas.

Alamofire is a Swift-based HTTP networking library for iOS and macOS. It provides an elegant interface on top of Apple's Foundation networking stack that simplifies a number of common networking tasks.

Alamofire provides chainable request/response methods, JSON parameter and response serialization, authentication, and many other features.

In this Alamofire tutorial, you'll use Alamofire to perform basic networking tasks like uploading files and requesting data from a third-party RESTful API.

Alamofire's elegance comes from the fact it was written from the ground up in Swift and does not inherit anything from its Objective-C counterpart, AFNetworking.

You should have a conceptual understanding of HTTP networking and some exposure to Apple's networking classes such as URLSession.

While Alamofire does obscure some implementation details, it's good to have some background knowledge if you ever need to troubleshoot your network requests.

Getting Started

Use the Download Materials button at the top or bottom of this tutorial to download the starter project.

Note: Alamofire is normally integrated using CocoaPods. It has already been installed for you in the downloaded project.

The app for this Alamofire tutorial is named PhotoTagger. When complete, it will let you select an image from your library (or camera if you're running on an actual device) and upload the image to a third-party service called Imagga. This service will perform some image recognition tasks to come up with a list of tags and main colors for the image:

alamofire tutorial

This project uses CocoaPods, so open it using the PhotoTagger.xcworkspace file.

Note: To learn more about CocoaPods, check out this tutorial by Joshua Greene, published right here on the site.

Build and run the project. You'll see the following:

alamofire tutorial

Click Select Photo and choose a photo. The background image will be replaced with the image you chose.

Open Main.storyboard and you'll see the additional screens for displaying tags and colors have been added for you. All that remains is to upload the image and fetch the tags and colors.

The Imagga API

Imagga is an image recognition Platform-as-a-Service that provides image tagging APIs for developers and businesses to build scalable, image-intensive cloud apps. You can play around with a demo of their auto-tagging service here.

You'll need to create a free developer account with Imagga for this Alamofire tutorial. Imagga requires an authorization header in each HTTP request so only people with an account can use their services. Go to https://imagga.com/auth/signup/hacker and fill out the form. After you create your account, check out the dashboard:

Listed down in the Authorization section is a secret token you'll use later. You'll need to include this information with every HTTP request as a header.

Note: Make sure you copy the whole secret token; be sure to scroll over to the right and verify you copied everything.

You'll be using Imagga's content endpoint to upload the photos, tagging endpoint for the image recognition and colors endpoint for color identification. You can read all about the Imagga API at http://docs.imagga.com.

REST, HTTP, JSON — What's That?

If you're coming to this tutorial with very little experience in using third-party services over the Internet, you might be wondering what all those acronyms mean! :]

HTTP is the application protocol, or set of rules, web sites use to transfer data from the web server to your screen. You've seen HTTP (or HTTPS) listed at the front of every URL you type into a web browser. You might have heard of other application protocols, such as FTP, Telnet, and SSH. HTTP defines several request methods, or verbs, the client (your web browser or app) uses to indicate the desired action:

  • GET: Retrieves data, such as a web page, but doesn't change any data on the server.
  • HEAD: Identical to GET but only sends back the headers and none of the actual data.
  • POST: Sends data to the server, commonly used when filling a form and clicking submit.
  • PUT: Sends data to the specific location provided.
  • DELETE: Deletes data from the specific location provided.
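The verbs above map directly onto properties of Foundation's URLRequest. As a minimal, hypothetical sketch (the URL here is a placeholder, not one of Imagga's endpoints):

```swift
import Foundation

// A hypothetical endpoint; the HTTP verb is just a string property on the request.
var request = URLRequest(url: URL(string: "https://example.com/photos")!)
request.httpMethod = "POST"  // could also be "GET", "HEAD", "PUT" or "DELETE"
request.setValue("application/json", forHTTPHeaderField: "Accept")

print(request.httpMethod!)  // POST
```

Nothing is sent until you hand the request to a session or a library; the verb simply tells the server what action you intend.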

REST, or REpresentational State Transfer, is a set of rules for designing consistent, easy-to-use and maintainable web APIs. REST has several architecture rules that enforce things such as not persisting states across requests, making requests cacheable, and providing uniform interfaces. This makes it easy for app developers like you to integrate the API into your app, without needing to track the state of data across requests.

JSON stands for JavaScript Object Notation. It provides a straightforward, human-readable and portable mechanism for transporting data between two systems. JSON has a limited number of data types: string, boolean, array, object/dictionary, null and number. There's no distinction between integers and decimals.

There are a few native choices for converting your objects in memory to JSON and vice-versa: the good old JSONSerialization class and the newly-added JSONEncoder and JSONDecoder classes. In addition, there are numerous third-party libraries that help with handling JSON. You'll use one of them, SwiftyJSON, in this tutorial.
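As a minimal sketch of the JSONSerialization route, here's a round trip on a made-up payload shaped loosely like a tagging response (not actual Imagga output):

```swift
import Foundation

// Hypothetical JSON payload with a nested array of tag dictionaries.
let jsonString = "{\"results\": [{\"tags\": [{\"tag\": \"cat\"}, {\"tag\": \"pet\"}]}]}"
let data = jsonString.data(using: .utf8)!

// Deserialize into plain Foundation types, then walk the structure by hand.
let object = try! JSONSerialization.jsonObject(with: data) as! [String: Any]
let results = object["results"] as! [[String: Any]]
let tagDicts = results[0]["tags"] as! [[String: Any]]
let tags = tagDicts.map { $0["tag"] as! String }

print(tags)  // ["cat", "pet"]
```

Libraries like SwiftyJSON exist to wrap exactly this kind of repetitive casting, which is why this tutorial uses one later on.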

The combination of HTTP, REST and JSON make up a good portion of the web services available to you as a developer. Trying to understand how every little piece works can be overwhelming. Libraries like Alamofire can help reduce the complexity of working with these services, and get you up and running faster than you could without their help.

What is Alamofire Good For?

Why do you need Alamofire at all? Apple already provides URLSession and other classes for downloading content via HTTP, so why complicate things with another third-party library?

The short answer is that Alamofire is based on URLSession, but it frees you from writing boilerplate code, which makes writing networking code much easier. You can access data on the Internet with very little effort, and your code will be much cleaner and easier to read.

There are several major functions available with Alamofire:

  • Alamofire.upload: Upload files with multipart, stream, file or data methods.
  • Alamofire.download: Download files or resume a download already in progress.
  • Alamofire.request: Every other HTTP request not associated with file transfers.

These Alamofire methods are global within Alamofire so you don't have to instantiate a class to use them. There are underlying pieces to Alamofire that are classes and structs, like SessionManager, DataRequest, and DataResponse; however, you don't need to fully understand the entire structure of Alamofire to start using it.

Here's an example of the same networking operation with both Apple's URLSession and Alamofire's request function:

// With URLSession
public func fetchAllRooms(completion: @escaping ([RemoteRoom]?) -> Void) {
  guard let url = URL(string: "http://localhost:5984/rooms/_all_docs?include_docs=true") else {
    completion(nil)
    return
  }

  var urlRequest = URLRequest(url: url,
                              cachePolicy: .reloadIgnoringLocalAndRemoteCacheData,
                              timeoutInterval: 10.0 * 1000)
  urlRequest.httpMethod = "GET"
  urlRequest.addValue("application/json", forHTTPHeaderField: "Accept")

  let task = urlSession.dataTask(with: urlRequest)
  { (data, response, error) -> Void in
    guard error == nil else {
      print("Error while fetching remote rooms: \(String(describing: error))")
      completion(nil)
      return
    }

    guard let data = data,
      let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any] else {
        print("Nil data received from fetchAllRooms service")
        completion(nil)
        return
    }

    guard let rows = json?["rows"] as? [[String: Any]] else {
      print("Malformed data received from fetchAllRooms service")
      completion(nil)
      return
    }

    let rooms = rows.flatMap { roomDict in return RemoteRoom(jsonData: roomDict) }
    completion(rooms)
  }

  task.resume()
}

Versus:

// With Alamofire
func fetchAllRooms(completion: @escaping ([RemoteRoom]?) -> Void) {
  guard let url = URL(string: "http://localhost:5984/rooms/_all_docs?include_docs=true") else {
    completion(nil)
    return
  }
  Alamofire.request(url,
                    method: .get,
                    parameters: ["include_docs": "true"])
  .validate()
  .responseJSON { response in
    guard response.result.isSuccess else {
      print("Error while fetching remote rooms: \(String(describing: response.result.error))")
      completion(nil)
      return
    }

    guard let value = response.result.value as? [String: Any],
      let rows = value["rows"] as? [[String: Any]] else {
        print("Malformed data received from fetchAllRooms service")
        completion(nil)
        return
    }

    let rooms = rows.flatMap { roomDict in return RemoteRoom(jsonData: roomDict) }
    completion(rooms)
  }
}

You can see the required setup for Alamofire is shorter, and it's much clearer what the function does. You deserialize the response with responseJSON(options:completionHandler:), and calling validate() to verify the response status code is in the default acceptable range between 200 and 299 simplifies error condition handling.

Now that the theory is out of the way, it's time to start using Alamofire.

Uploading Files

Open ViewController.swift and add the following to the top, below import SwiftyJSON:

import Alamofire        

This lets you use the functionality provided by the Alamofire module in your code, which you'll be doing soon!

Next, go to imagePickerController(_:didFinishPickingMediaWithInfo:) and add the following to the end, right before the call to dismiss(animated:):

// 1
takePictureButton.isHidden = true
progressView.progress = 0.0
progressView.isHidden = false
activityIndicatorView.startAnimating()

upload(image: image,
       progressCompletion: { [weak self] percent in
        // 2
        self?.progressView.setProgress(percent, animated: true)
  },
       completion: { [weak self] tags, colors in
        // 3
        self?.takePictureButton.isHidden = false
        self?.progressView.isHidden = true
        self?.activityIndicatorView.stopAnimating()

        self?.tags = tags
        self?.colors = colors

        // 4
        self?.performSegue(withIdentifier: "ShowResults", sender: self)
})

Everything with Alamofire is asynchronous, which means you'll update the UI in an asynchronous manner:

  1. Hide the upload button, and show the progress view and activity view.
  2. While the file uploads, you call the progress handler with an updated percent. This updates the progress indicator of the progress bar.
  3. The completion handler executes when the upload finishes. This sets the controls back to their original state.
  4. Finally, the storyboard advances to the results screen when the upload completes, successfully or not. The user interface doesn't change based on the error condition.

Next, find upload(image:progressCompletion:completion:) at the bottom of the file. It is currently just a method stub, so give it the following implementation:

func upload(image: UIImage,
            progressCompletion: @escaping (_ percent: Float) -> Void,
            completion: @escaping (_ tags: [String]?, _ colors: [PhotoColor]?) -> Void) {
  // 1
  guard let imageData = UIImageJPEGRepresentation(image, 0.5) else {
    print("Could not get JPEG representation of UIImage")
    return
  }

  // 2
  Alamofire.upload(multipartFormData: { multipartFormData in
    multipartFormData.append(imageData,
                             withName: "imagefile",
                             fileName: "image.jpg",
                             mimeType: "image/jpeg")
  },
                   to: "http://api.imagga.com/v1/content",
                   headers: ["Authorization": "Basic xxx"],
                   encodingCompletion: { encodingResult in
  })
}

Here's what's happening:

  1. The image that's being uploaded needs to be converted to a Data instance.
  2. Here you convert the JPEG data blob (imageData) into a MIME multipart request to send to the Imagga content endpoint.

Note: Make sure to replace Basic xxx with the actual authorization header taken from the Imagga dashboard.

Next, add the following to the encodingCompletion closure:

switch encodingResult {
case .success(let upload, _, _):
  upload.uploadProgress { progress in
    progressCompletion(Float(progress.fractionCompleted))
  }
  upload.validate()
  upload.responseJSON { response in
  }
case .failure(let encodingError):
  print(encodingError)
}

This chunk of code calls the Alamofire upload function and passes in a small calculation to update the progress bar as the file uploads. It then validates the response has a status code in the default acceptable range between 200 and 299.

Note: Prior to Alamofire 4 it was not guaranteed progress callbacks were called on the main queue. Beginning with Alamofire 4, the new progress callback API is always called on the main queue.

Next, add the following code to the upload.responseJSON closure:

// 1
guard response.result.isSuccess,
  let value = response.result.value else {
    print("Error while uploading file: \(String(describing: response.result.error))")
    completion(nil, nil)
    return
}

// 2
let firstFileID = JSON(value)["uploaded"][0]["id"].stringValue
print("Content uploaded with ID: \(firstFileID)")

// 3
completion(nil, nil)

Here's a step-by-step explanation of the above code:

  1. Check that the upload was successful, and the result has a value; if not, print the error and call the completion handler.
  2. Using SwiftyJSON, retrieve the firstFileID from the response.
  3. Call the completion handler to update the UI. At this point, you don't have any downloaded tags or colors, so simply call this with no data.

Note: Every response has a Result enum with a value and type. Using automatic validation, the result is considered a success when it returns a valid HTTP code between 200 and 299 and the Content Type is of a valid type specified in the Accept HTTP header field.

You can perform manual validation by adding .validate options as shown below:

Alamofire.request("https://httpbin.org/get", parameters: ["foo": "bar"])
  .validate(statusCode: 200..<300)
  .validate(contentType: ["application/json"])
  .response { response in
  // response handling code
}

The UI won't show an error if you hit an error during the upload; it only returns no tags or colors to the user. This isn't the best user experience, but it's fine for this tutorial.

Build and run your project; select an image and watch the progress bar change as the file uploads. You should see a note similar to the following in your console when the upload completes:

ImaggaUploadConsole

Congratulations, you've successfully uploaded a file over the Interwebs!

Retrieving Data

The next step after uploading the image to Imagga is to fetch the tags Imagga produces after it analyzes the photo.

Add the following method to the ViewController extension, below upload(image:progress:completion:):

func downloadTags(contentID: String, completion: @escaping ([String]?) -> Void) {
  // 1
  Alamofire.request("http://api.imagga.com/v1/tagging",
                    parameters: ["content": contentID],
                    headers: ["Authorization": "Basic xxx"])

    // 2
    .responseJSON { response in
      guard response.result.isSuccess,
        let value = response.result.value else {
          print("Error while fetching tags: \(String(describing: response.result.error))")
          completion(nil)
          return
      }

      // 3
      let tags = JSON(value)["results"][0]["tags"].array?.map { json in
        json["tag"].stringValue
      }

      // 4
      completion(tags)
  }
}

Here's a step-by-step explanation of the above code:

  1. Perform an HTTP GET request against the tagging endpoint, sending the URL parameter content with the ID you received after the upload. Again, be sure to replace Basic xxx with your actual authorization header.
  2. Check that the response was successful, and the result has a value; if not, print the error and call the completion handler.
  3. Using SwiftyJSON, retrieve the raw tags array from the response. Iterate over each dictionary object in the tags array, retrieving the value associated with the tag key.
  4. Call the completion handler, passing in the tags received from the service.

Next, go back to upload(image:progress:completion:) and replace the call to the completion handler in the success condition with the following:

self.downloadTags(contentID: firstFileID) { tags in
  completion(tags, nil)
}

This simply sends along the tags to the completion handler.

Build and run your project; select a photo and you should see something similar to the following appear:

alamofire tutorial

Pretty slick! That Imagga is one smart API. :] Next, you'll fetch the colors of the image.

Add the following method to the ViewController extension, below downloadTags(contentID:completion:):

func downloadColors(contentID: String, completion: @escaping ([PhotoColor]?) -> Void) {
  // 1
  Alamofire.request("http://api.imagga.com/v1/colors",
                    parameters: ["content": contentID],
                    headers: ["Authorization": "Basic xxx"])
    .responseJSON { response in
      // 2
      guard response.result.isSuccess,
        let value = response.result.value else {
          print("Error while fetching colors: \(String(describing: response.result.error))")
          completion(nil)
          return
      }

      // 3
      let photoColors = JSON(value)["results"][0]["info"]["image_colors"].array?.map { json in
        PhotoColor(red: json["r"].intValue,
                   green: json["g"].intValue,
                   blue: json["b"].intValue,
                   colorName: json["closest_palette_color"].stringValue)
      }

      // 4
      completion(photoColors)
  }
}

Taking each numbered comment in turn:

  1. Perform an HTTP GET request against the colors endpoint, sending the URL parameter content with the ID you received after the upload. Again, be sure to replace Basic xxx with your actual authorization header.
  2. Check that the response was successful, and the result has a value; if not, print the error and call the completion handler.
  3. Using SwiftyJSON, retrieve the image_colors array from the response. Iterate over each dictionary object in the image_colors array, and transform it into a PhotoColor object. This object pairs colors in the RGB format with the color name as a string.
  4. Call the completion handler, passing in the photoColors from the service.

Finally, go back to upload(image:progress:completion:) and replace the call to downloadTags(contentID:) in the success condition with the following:

self.downloadTags(contentID: firstFileID) { tags in
  self.downloadColors(contentID: firstFileID) { colors in
    completion(tags, colors)
  }
}

This nests the operations of uploading the image, downloading tags and downloading colors.

Build and run your project again; this time, you should see the returned color tags when you select the Colors button:

alamofire tutorial

This uses the RGB colors you mapped to PhotoColor structs to change the background color of the view. You've now successfully uploaded an image to Imagga and fetched data from two different endpoints. You've come a long way, but there's some room for improvement in how you're using Alamofire in PhotoTagger.

Improving PhotoTagger

You probably noticed some repeated code in PhotoTagger. If Imagga released v2 of their API and deprecated v1, PhotoTagger would no longer function and you'd have to update the URL in each of the three methods. Similarly, if your authorization token changed you'd be updating it all over the place.

Alamofire provides a simple method to eliminate this code duplication and provide centralized configuration. The technique involves creating a struct conforming to URLRequestConvertible and updating your upload and request calls.

Create a new Swift file by clicking File\New\File... and selecting Swift file under iOS. Click Next, name the file ImaggaRouter.swift, select the Group PhotoTagger with the yellow folder icon and click Create.

Add the following to your new file:

import Alamofire

public enum ImaggaRouter: URLRequestConvertible {
  // 1
  enum Constants {
    static let baseURLPath = "http://api.imagga.com/v1"
    static let authenticationToken = "Basic xxx"
  }

  // 2
  case content
  case tags(String)
  case colors(String)

  // 3
  var method: HTTPMethod {
    switch self {
    case .content:
      return .post
    case .tags, .colors:
      return .get
    }
  }

  // 4
  var path: String {
    switch self {
    case .content:
      return "/content"
    case .tags:
      return "/tagging"
    case .colors:
      return "/colors"
    }
  }

  // 5
  var parameters: [String: Any] {
    switch self {
    case .tags(let contentID):
      return ["content": contentID]
    case .colors(let contentID):
      return ["content": contentID, "extract_object_colors": 0]
    default:
      return [:]
    }
  }

  // 6
  public func asURLRequest() throws -> URLRequest {
    let url = try Constants.baseURLPath.asURL()

    var request = URLRequest(url: url.appendingPathComponent(path))
    request.httpMethod = method.rawValue
    request.setValue(Constants.authenticationToken, forHTTPHeaderField: "Authorization")
    request.timeoutInterval = TimeInterval(10 * 1000)

    return try URLEncoding.default.encode(request, with: parameters)
  }
}

Here's a step-by-step explanation of the above code:

  1. Declare constants to hold the Imagga base URL and your authorization token. Replace Basic xxx with your actual authorization header.
  2. Declare the enum cases. Each case corresponds to an API endpoint.
  3. Return the HTTP method for each API endpoint.
  4. Return the path for each API endpoint.
  5. Return the parameters for each API endpoint.
  6. Use all of the above components to create a URLRequest for the requested endpoint.

Now all your boilerplate code is in a single place, should you ever need to update it.

Go back to ViewController.swift and in upload(image:progress:completion:) replace:

Alamofire.upload(
  multipartFormData: { multipartFormData in
    multipartFormData.append(imageData,
                             withName: "imagefile",
                             fileName: "image.jpg",
                             mimeType: "image/jpeg")
  },
  to: "http://api.imagga.com/v1/content",
  headers: ["Authorization": "Basic xxx"],

with the following:

Alamofire.upload(multipartFormData: { multipartFormData in
  multipartFormData.append(imageData,
                           withName: "imagefile",
                           fileName: "image.jpg",
                           mimeType: "image/jpeg")
},
  with: ImaggaRouter.content,

Next, replace the call to Alamofire.request in downloadTags(contentID:completion:) with:

Alamofire.request(ImaggaRouter.tags(contentID))        

Finally, update the call to Alamofire.request in downloadColors(contentID:completion:) with:

Alamofire.request(ImaggaRouter.colors(contentID))        

Note: Be sure to leave the responseJSON handlers in place for both of the previous edits.

Build and run one final time; everything should function just as before, which means you've refactored everything without breaking your app. However, you don't have to go through your entire source code if anything on the Imagga integration ever changes: APIs, your authorization token, parameters, etc. Awesome job!

Where To Go From Here?

You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial. Don't forget to replace your authorization token as appropriate!

This tutorial covered the very basics. You can take a deeper dive by looking at the documentation on the Alamofire site at https://github.com/Alamofire/Alamofire.

Also, you can take some time to learn more about Apple's URLSession, which Alamofire uses under the hood:

  • Apple WWDC 2015 - 711 - Networking with NSURLSession
  • Apple URL Session Programming Guide
  • Ray Wenderlich - NSURLSession Tutorial

Please share any comments or questions about this tutorial in the forum discussion below!


Source: https://www.raywenderlich.com/35-alamofire-tutorial-getting-started
