I am too old to stay up late for WWDC, but my baby's crying and my mother's urging to change his diaper got me up early anyway, so I still watched the WWDC 2017 Keynote while it was "hot". As in previous years, although WWDC is a developer conference, the Keynote is not aimed specifically at us developers; it also serves to report on the state of the company and announce new products. For technical people, the sessions that follow are probably more meaningful. If I had to sum up this year's Keynote in one phrase, it would be incremental innovation. Among the major technologies, arguably only ARKit deserves close study, but we still saw updates such as cross-app drag and drop and the new Files app that push further past iOS's original constraints (I won't even mention iMessage transfers; China has been leading the world in mobile payments for at least three years). I believe iOS 11, especially together with the new hardware, will give users a very good experience.

As an iOS developer, and as in previous years, I have sorted out the areas that may deserve attention.

New frameworks

Two major frameworks have been added to the SDK: Core ML, which simplifies and integrates machine learning, and ARKit, which is used to build augmented reality (AR) applications.

Core ML

Ever since AlphaGo appeared, deep learning has undoubtedly become a hot topic in the industry. Last year Google also shifted its strategy from Mobile-first to AI-first. It is fair to say that almost every first-tier Internet company is betting on AI, and at the moment machine learning, and deep learning in particular, looks like the most promising path.

If you are not very familiar with machine learning, allow me to overstep a little and give a brief introduction here. You can think of a machine learning model as a black-box function: you give it some input (a piece of text, or a picture), and the function produces a specific output (for example, the names of people and places in the text, or the store brands that appear in the picture). At first the model may be very crude and completely unable to give correct results, but you can train it, and even improve it, with large amounts of existing data and known correct answers. If the model is well designed and the training volume is large enough, the black box will not only achieve high accuracy on the training data, it will often also return correct results for unseen, real-world input. A model like that is a trained model that can actually be used.

Training a machine learning model is a very heavy task. Core ML's role is more about converting an already trained model into a form iOS can understand, and "feeding" new data to the model to obtain its output. Abstracting a problem and creating a model is not that hard, but improving and training models is a lifelong study in itself, and readers of this article are probably not that interested in it. Fortunately, Apple provides a set of tools to convert various machine learning models into a form Core ML can understand. With that, you can easily use models trained by others in your own iOS app. In the past you might have had to find the model yourself, write some C++ code to call it across platforms, and still struggle to take advantage of the GPU and Metal on iOS devices (unless you wrote shaders to do the matrix math yourself). Core ML lowers the barrier to using a model considerably.
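To give a feel for the shape of the API, here is a rough sketch. It assumes a hypothetical image-classification model, FlowerClassifier.mlmodel, has been added to the project; Xcode generates a Swift wrapper class whose prediction method mirrors the model's declared inputs and outputs, so the names `FlowerClassifier`, `image`, `classLabel`, and `classLabelProbs` below are illustrative, not part of any real API.

```swift
import CoreML
import CoreVideo

// Hypothetical: "FlowerClassifier" stands for whatever Swift class Xcode
// generates from your converted .mlmodel file. The input and output names
// depend entirely on how the model was defined before conversion.
func classifyFlower(in pixelBuffer: CVPixelBuffer) {
    do {
        let model = FlowerClassifier()
        let output = try model.prediction(image: pixelBuffer)
        print("Most likely label: \(output.classLabel)")
        print("Probabilities: \(output.classLabelProbs)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```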
Beyond supporting your own models, Core ML underpins the Vision framework for visual recognition on iOS and the semantic analysis APIs in Foundation. Ordinary developers can benefit directly from these high-level APIs, for example for face detection or text recognition in images. Such capabilities existed in earlier versions of the SDK as well, but in the iOS 11 SDK they are gathered into a new framework, and more specific, lower-level control is exposed. For example, you can use Vision's high-level interface while specifying the model used underneath. This brings new possibilities to computer vision on iOS. Most of what Google or Samsung have done with AI on Android amounts to integrating services into their own applications; by comparison, Apple, leveraging its control over its own ecosystem and hardware, hands more of the choice to third-party developers.

ARKit

The AR demonstration was arguably the only real highlight of the Keynote. In the iOS 11 SDK, Apple brought a great gift to developers, especially AR-related developers: ARKit. AR is not a new technology, and games like Pokémon Go have already demonstrated its potential in gaming, but aside from the IP and the novelty I personally don't think Pokémon Go is qualified to represent what AR technology can do. The on-stage demo showed us one possibility: at a glance, ARKit does a remarkably good job of detecting planes and keeping virtual objects stable using a single camera and the gyroscope. Apple, which rarely moves first but aims to do things best, seems to have stepped back onto this stage at just the right moment.

ARKit greatly lowers the barrier for ordinary developers to play with AR, and it is also Apple's answer to VR at this stage. We can expect more AR games in the vein of Pokémon Go (virtual pets blended with reality are probably the first idea that comes to mind) to ship with the help of ARKit and SceneKit, and even producing multimedia such as AR films that present a scene from multiple angles, on the capabilities of existing iPad Pro hardware, may no longer be just a dream.

The corresponding API is not particularly complicated. The view involved is essentially an extension of SceneKit's view, and real-world positioning is handled by the system; what developers need to do is place virtual objects at the right positions on screen and let those objects interact. Combining this with Core ML to recognize, and interact with, the real objects seen by the camera gives special-effects camera and photography apps plenty of room for imagination.
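As a minimal sketch of that division of labor (world tracking handled by the system, placement handled by us), the view controller below runs an AR session with horizontal plane detection and drops a small cube into the scene; the sizes and positions are arbitrary illustration values, and this targets the released iOS 11 API.

```swift
import UIKit
import SceneKit
import ARKit

// A minimal ARKit sketch: the session tracks the real world for us,
// and we only decide where to put virtual content.
class ARViewController: UIViewController, ARSCNViewDelegate {
    private var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView = ARSCNView(frame: view.bounds)
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Place a 10 cm cube half a meter in front of the initial camera position.
        let scene = SCNScene()
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        let boxNode = SCNNode(geometry: box)
        boxNode.position = SCNVector3(0, 0, -0.5)
        scene.rootNode.addChildNode(boxNode)
        sceneView.scene = scene
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking with horizontal plane detection.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    // Called when ARKit detects a plane; anchor virtual objects to it here if needed.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        // e.g. attach a visualization of the plane, or move game objects onto it.
    }
}
```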
Xcode

Editor and compiler

Speed is life, and a developer's life is wasted waiting for compilation. Swift has been well received since its launch, but slow compile times, code completion that comes and goes, the lack of refactoring support, and other tool-chain shortcomings have been its most prominent pain points. The editor in Xcode 9 has been rewritten and now supports refactoring Swift code (albeit still rather basic); version control is given a more prominent place, with GitHub integration added; and wireless deployment and debugging over the local network is now supported. The new build system is written in Swift, and after some comparison the compile speed really has improved significantly. I'm not sure how much of it is due to moving to Swift 4, but the total build time of the work project I'm on dropped from three and a half minutes to about two and a half, which is quite noticeable.

The indexing system in Xcode 9 also uses a new engine, which is said to make searching in large projects up to 50 times faster. I didn't notice a big difference, probably because the project I work on isn't large enough, and the Swift code in the project still loses its syntax highlighting from time to time, likely because the indexing and build systems don't yet coordinate well. It is still beta software, after all; perhaps the Xcode team just needs more time (though it may well stay this way right up to the final release). Since the Swift 4 compiler is also compatible with Swift 3 (just set the Swift version in Build Settings), barring surprises I will probably use the Xcode 9 beta for daily development and switch back to Xcode 8 for packaging and release. After all, saving a minute and a half on every full build is very tempting.

The quality of this beta is unexpectedly good. Perhaps because the changes of the past year or two have been small and incremental, Apple's software teams have had relatively ample development time? In short, Xcode 9 beta works very well so far.

Named Color

This is a change I really like. You can now define colors in an xcassets catalog and reference them from code or from Interface Builder. When building UI in IB, one of the most annoying situations is the designer asking us to change the theme color: you used to have to hunt for that color everywhere and replace it. Now you only need to change it in xcassets and it is reflected everywhere in IB. In code, it looks something like the sketch below.
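A minimal sketch, assuming a color set named "ThemeColor" exists in the asset catalog; the name and the outlet are made up for illustration.

```swift
import UIKit

final class ThemedViewController: UIViewController {
    @IBOutlet private var titleLabel: UILabel!   // hypothetical outlet wired up in IB

    override func viewDidLoad() {
        super.viewDidLoad()
        // "ThemeColor" is assumed to be a color set defined in the app's .xcassets.
        // UIColor(named:) is new in iOS 11 and returns nil if no such color exists.
        titleLabel.textColor = UIColor(named: "ThemeColor")
        // The same named color can also be picked directly from IB's color menu.
    }
}
```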
Other notable changes

The rest are minor changes. I took a quick look and listed the ones I think are worth mentioning, with reference links attached.

That's all for now; I'll add more if I find anything else interesting. If you think some other change deserves a mention, please leave a comment and I'll add it.