New features of iOS 11 SDK that developers need to know

I am too old to stay up late for WWDC, but thanks to my baby's crying and my mother's urging to go change his diaper, I still got up early enough to watch the WWDC 2017 Keynote while it was still hot. It was much like previous years: although WWDC is a developer conference, the Keynote is not aimed specifically at us developers; it also serves to report on the state of the company and announce new products. For technical people, the sessions that follow may be more meaningful. If I had to sum up this year's Keynote in one phrase, it would be incremental innovation. In terms of major technology, ARKit is arguably the only thing worth digging into, but we also saw updates such as cross-app drag and drop and the new Files app that further loosen iOS's original constraints (I won't even bother mentioning iMessage transfers; China has been leading the world in mobile payments for at least three years). I believe iOS 11, especially together with the new hardware, will bring users a good experience.

As in previous years, I have put together the areas that we iOS developers may need to pay attention to.

New frameworks

Two major frameworks have been added to the SDK: Core ML, which simplifies integrating machine learning, and ARKit, which is used to build augmented reality (AR) applications.

Core ML

Since the emergence of AlphaGo, deep learning has undoubtedly become a hot topic in the industry. Last year, Google also changed its strategy from Mobile-first to AI-first. It can be said that almost all first-tier Internet companies are betting on AI. At present, machine learning, especially deep learning, seems to be the most promising path.

If you are not very familiar with machine learning, I hope you will allow me to "overstep" and give a brief introduction here. You can think of a machine learning model as a black-box function: you give it some input (a piece of text, or a picture), and the function produces a specific output (the names of people and places in the text, or the store brands that appear in the picture, for example). At first this model may be very crude and unable to give correct results at all, but you can train it, and even improve it, with a large amount of existing data and their known correct answers. If the model is well designed and the training data is large enough, the trained model will not only achieve high accuracy on the training data, but will also often give correct results for real-world inputs it has never seen. Such a model is a trained model that can actually be put to use.

Training a machine learning model is a very heavy task. The role of Core ML is more about converting an already-trained model into a form that iOS can understand, and "feeding" the model new data to get an output. Although abstracting a problem and building a model are not hard to understand, improving and training models can be a lifelong study, and readers of this article may not be too interested in that. Fortunately, Apple provides a set of tools to convert various machine learning models into a form Core ML can understand. With this, you can easily use models that others have already trained in your iOS app. In the past, you might have had to find the model yourself and then write some C++ code to call it in a cross-platform way, and it was hard to take advantage of the GPU power and Metal on iOS devices (unless you wrote your own shaders for matrix operations). Core ML lowers the bar for using models considerably.
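To make the "feed new data, get output" idea concrete, here is a minimal sketch using Core ML's lower-level API. It assumes a hypothetical compiled model named Sentiment bundled with the app, with an input feature called text and an output called label; in practice Xcode also generates a typed Swift wrapper class for each .mlmodel you add, which is usually what you would call instead.

```swift
import CoreML

// A minimal sketch: load a compiled model from the app bundle and run one prediction.
// "Sentiment", "text" and "label" are hypothetical names that depend entirely on your model.
func runPrediction() throws {
    guard let modelURL = Bundle.main.url(forResource: "Sentiment", withExtension: "mlmodelc") else {
        return // the compiled model is not in the bundle
    }
    let model = try MLModel(contentsOf: modelURL)

    // Wrap the new input in a feature provider keyed by the model's input feature names.
    let input = try MLDictionaryFeatureProvider(dictionary: ["text": "I really like this talk"])
    let output = try model.prediction(from: input)

    // Read back the output feature the model declares.
    if let label = output.featureValue(for: "label") {
        print("Prediction: \(label)")
    }
}
```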

Core ML is what drives the Vision framework for visual recognition in iOS and the semantic-analysis APIs in Foundation. Ordinary developers can benefit directly from these high-level APIs, for things like face detection in images or text recognition. These capabilities existed in earlier SDK versions too, but in the iOS 11 SDK they are gathered into new frameworks, and some more specific, lower-level control is exposed. For example, you can use the high-level interface in Vision while specifying the underlying model yourself. This brings new possibilities to computer vision on iOS.
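As an example of those high-level APIs, here is a minimal sketch of face detection with Vision. Plugging in your own Core ML model would mean wrapping it in a VNCoreMLModel and issuing a VNCoreMLRequest instead of the face request used here.

```swift
import UIKit
import Vision

// A minimal sketch: find face rectangles in a UIImage using the high-level Vision API.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        let faces = request.results as? [VNFaceObservation] ?? []
        // Each observation carries a bounding box normalized to the image's coordinate space.
        for face in faces {
            print("Found a face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```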

Most of the AI work Google and Samsung have done on Android goes into integrating services into their own applications. By comparison, Apple, relying on its control over its own ecosystem and hardware, hands more of the choices to third-party developers.

ARKit

The AR demonstration was the only real highlight of the Keynote. In the iOS 11 SDK, Apple brought a big gift to developers, especially those working on AR: ARKit. AR is not a new technology, and games like Pokémon Go have already proven its potential in gaming, although apart from the IP and the novelty, I personally don't think Pokémon Go really represents what AR technology can do. The on-stage demo showed us one possibility: judging purely from the demonstration, ARKit does a remarkable job of recognizing planes and keeping virtual objects stable using a single camera and the gyroscope. It is almost certain that Apple, a company that is rarely the first to do something but often the one to do it best, has stepped back onto center stage here.

ARKit dramatically lowers the bar for ordinary developers to play with AR, and it is also Apple's answer to VR at this stage. It is easy to imagine more AR games in the vein of Pokémon Go (virtual pets blended into the real world are probably the first thing that comes to mind) being built on ARKit and SceneKit, and with the capabilities already in devices like the iPad Pro, even producing multimedia such as AR films that can be viewed from every angle may no longer be just a dream.

What backs this up is a set of fairly simple APIs. The views involved are essentially extensions of SceneKit, and positioning in the real world is handled by the system; what developers need to do is place virtual objects at the right spots on screen and let them interact with each other. Combined with Core ML to recognize real objects in the camera feed and interact with them, this gives special-effects camera and photography apps plenty of room for imagination.
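To give a rough idea of how little code is involved, here is a minimal sketch. It assumes an ARSCNView wired up in a storyboard, starts world tracking, and places a small virtual box half a meter in front of the starting camera position; the names and sizes are only for illustration, and an A9 or newer device is required.

```swift
import UIKit
import ARKit
import SceneKit

final class SimpleARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView! // assumed to be set up in the storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking uses the camera and motion sensors to anchor content in real space.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    // Drop a 10 cm box 0.5 m in front of where the session started.
    func addBox() {
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0, -0.5)
        sceneView.scene.rootNode.addChildNode(box)
    }
}
```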

Xcode

Editors and Compilers

Speed is life, and developers' lives are wasted waiting on the compiler. Swift has been well received since its launch, but the slow compile times, the flaky syntax hints, and the lack of refactoring support in the toolchain have been its biggest black marks. The editor in Xcode 9 has been rewritten: it supports refactoring Swift code (though still in a very basic way), gives VCS a more prominent place, adds GitHub integration, and allows wireless deployment and debugging over the same local network.

The new build system is written in Swift. After some comparison, compile times really have improved noticeably. I am not sure how much of this is due to moving to Swift 4, but the total compile time of the work project I am on dropped from three and a half minutes to about two and a half, which is quite significant.

The indexing system in Xcode 9 also uses a new engine, which is said to make searching in large projects up to 50 times faster. I can't really confirm this, since the project I work on is not big enough. Swift code in the project still loses its syntax highlighting from time to time, which is probably the indexing system and the build system not cooperating well. It is still beta software, after all, so maybe we should give the Xcode team more time (though it may well stay this way through the final release).

Since the Swift 4 compiler is also compatible with Swift 3 (just set the Swift version in Build Settings), barring surprises I will probably use the Xcode 9 beta for daily development and switch back to Xcode 8 for packaging and release. After all, saving a minute and a half on every full build is very tempting.

The quality of this beta is unexpectedly good. Perhaps because the past year or two of updates have been incremental, Apple's software team has had relatively ample development time? In any case, the Xcode 9 beta already works very well.

Named Color

This is a change I really like. You can now define colors in the xcassets catalog and then reference them by name in code or in IB.

When building UI in IB, one of the most annoying situations is the designer asking us to change the theme color: you may have to hunt down every place that color is used and replace it. Now you only need to change it in xcassets and it is reflected everywhere in IB.
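In code it is just as simple. The sketch below assumes a color set named "ThemeColor" has been added to the asset catalog; the name is only an example.

```swift
import UIKit

final class ThemedViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // UIColor(named:) is new in iOS 11 and returns nil if no color set with that name exists.
        view.backgroundColor = UIColor(named: "ThemeColor")
    }
}
```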

Other notable changes

The rest are minor changes. I took a quick look and listed the ones I think are worth mentioning, with reference links attached.

  • Drag and Drop - A very standard set of iOS APIs. As you would expect, the system handles most of the work, and developers almost only need to deal with the results. UITextView and UITextField support dragging natively, and UICollectionView and UITableView have a set of dedicated delegates to signal the start and end of a drag. You can also define drag behavior for any UIView subclass. Unlike dragging on the Mac, drag on iOS fully embraces the multi-touch screen, so you may need to handle several simultaneous drags (a minimal sketch follows this list).
  • FileProvider and FileProviderUI - Provides a set of interfaces similar to the Files app, allowing you to access files on the user's device or in the cloud. I believe it will become the standard for document-related apps in the future.
  • No more support for 32-bit apps - Although you can still run 32-bit apps in beta 1, Apple has clearly stated that support will be removed in the subsequent iOS 11 beta. So if you want your program to run on iOS 11 devices, recompiling it for 64-bit is a necessary step.
  • DeviceCheck - Developers who track users with advertising identifiers all day now have a better option (provided, of course, it is used for legitimate purposes). DeviceCheck lets your server talk to Apple's servers and set two bits of data for a single device. In short, you use the DeviceCheck API on the device to generate a token, send that token to your own server, and your server then talks to Apple's API to update or query the value for that device. The two bits can be used to track things like whether the user has already claimed a reward.
  • PDFKit - This is a framework that has been around for a long time on macOS but has been late to iOS. You can use this framework to display and manipulate PDF files.
  • IdentityLookup - You can develop an app extension to intercept system SMS and MMS messages. When the system message app receives a text message from an unknown person, it will ask all enabled filter extensions. If the extension indicates that the message should be intercepted, then the message will not be delivered to you. The extension has the opportunity to access a pre-specified server to make a judgment (so you can get the user's text message content openly, but of course considering privacy, these accesses are anonymous and encrypted, and Apple also prohibits such extensions from writing in the container).
  • Core NFC - Provides basic NFC tag reading on iPhone 7 and iPhone 7 Plus. It looks promising: as long as you have a suitable NFC tag, the phone can read it. However, since the capability cannot stay resident in the background, its practicality takes a hit. That said, I am not very familiar with this area; there may well be more scenarios where it fits.
  • Auto Fill - Filling in passwords from iCloud Keychain is now open to third-party developers. Username and password content types have been added to UITextInputTraits' textContentType; by setting the content type on the appropriate text field or text view, users get an auto-fill suggestion above the keyboard when asked for their username and password, letting them log in quickly (see the short sketch at the end of this list).
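Here is the drag sketch mentioned above: a minimal UIDragInteractionDelegate that turns an arbitrary view into a drag source providing a single string. The view and delegate names are just placeholders.

```swift
import UIKit

// Serves one string item when a drag begins; the system handles previews and drop targets.
final class SimpleDragDelegate: NSObject, UIDragInteractionDelegate {
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        let provider = NSItemProvider(object: "Hello, drag and drop" as NSString)
        return [UIDragItem(itemProvider: provider)]
    }
}

// Usage (keep a strong reference to the delegate somewhere):
// someLabel.isUserInteractionEnabled = true
// someLabel.addInteraction(UIDragInteraction(delegate: dragDelegate))
```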
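And the Auto Fill sketch: marking two hypothetical login fields with the new content types is enough for the QuickType bar to offer saved credentials.

```swift
import UIKit

// A minimal sketch for a login form; the field names are placeholders.
func configureLoginFields(usernameField: UITextField, passwordField: UITextField) {
    usernameField.textContentType = .username   // new in iOS 11
    passwordField.textContentType = .password   // new in iOS 11
    passwordField.isSecureTextEntry = true
}
```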

That's all for now. I'll add more if I find anything interesting. If you think there are any other changes worth mentioning, please leave a comment and I'll add them.
