Deep learning is a subset of machine learning that pursues a core goal of artificial intelligence research: enabling computers to model the world well enough to exhibit something we would recognize as intelligence. Conceptually, deep learning methods share a common structure. They interpret raw data through multiple processing layers, each layer taking the output of the previous layer as input and producing a more abstract representation. Given enough data and the right architecture, such an algorithm can extract the rules and features relevant to a task and apply them when it encounters new, similar situations.

The sci-fi-like Word Lens feature in Google Translate is powered by a set of deep learning algorithms. DeepMind's recently celebrated AlphaGo also swept the Go world on the strength of deep learning, though strictly speaking its winning algorithm is not a pure neural network but a combination of deep reinforcement learning and tree search, a classic AI technique.

Deep learning is a powerful approach to computational problems, such as image classification and natural language processing, that are too complex to solve directly with simple algorithms. However, its practical use is still limited. Most industries that use machine learning today have not fully realized the potential of deep learning and related methods, borrowing only some of their best practices. For example, those following recent developments may have heard that John Giannandrea, Google's former head of AI, took over the company's search department, a move that may yet disrupt the entire SEO field.
Recommendation systems powered by deep learning: personalizing the future

Deep learning will almost certainly drive the next major leap in personalization. Personalization has become a core concern for e-commerce companies, publishers, and marketing agencies because of its proven ability to increase sales, deepen engagement, and improve the overall user experience. If data is the fuel of personalization, recommendation systems are its engine, and advances in these algorithms will have a profound impact on the field and on the online experience of platform users. Below, we look at three specific areas where deep learning can complement and improve existing recommendation systems.

Incorporating content into the recommendation process

Item-to-item recommendation is a standard approach in recommender systems: when an e-commerce or publisher site makes a recommendation, it is based on items similar to those the user has viewed before. A typical implementation uses business logic built on metadata (another common data source is user interaction, as in Amazon's "Users who bought this item also bought..."). In practice, however, poor metadata quality is a common bottleneck: metadata is often missing or inconsistently assigned, and even perfect tags can only express indirect connections between the actual items.

With deep learning, we can bring an item's actual content, including its images, video, and text, into the recommendation process. Relationships between items then reflect the algorithm's richer understanding of the product itself, and depend less on manual labeling and extensive interaction history. Spotify's recommendation system is a notable example.
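To make the idea concrete, here is a minimal sketch of content-based item-to-item recommendation in plain Python. The embedding vectors are made up for illustration; in a real system they would be produced by a deep network applied to each item's images, video, or text. Similarity between these content embeddings, rather than metadata tags, drives the recommendation.

```python
import math

# Hypothetical content embeddings: in practice these would come from a
# deep network applied to each item's images, video, or text.
item_embeddings = {
    "red_dress":  [0.9, 0.1, 0.0],
    "blue_dress": [0.8, 0.2, 0.1],
    "microwave":  [0.0, 0.1, 0.95],
    "toaster":    [0.1, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend(item, k=2):
    """Return the k items whose content embeddings are closest to `item`."""
    query = item_embeddings[item]
    scores = [(other, cosine(query, emb))
              for other, emb in item_embeddings.items() if other != item]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scores[:k]]

print(recommend("red_dress", k=1))  # → ['blue_dress']
```

Note that no purchase history or tags are consulted: a brand-new item with zero interactions can be recommended the moment its embedding is computed, which is exactly what makes this approach attractive.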
In 2014 research, the company introduced deep learning into its recommendation system with the aim of providing more diverse song recommendations and a better personalized experience. The music streaming service had previously relied on collaborative filtering. Sander Dieleman, then a PhD student interning at Spotify, saw this as the approach's biggest flaw: because it is so heavily data-dependent, it inevitably misses less popular, emerging artists and songs not yet well known to the public. Dieleman therefore used a deep learning algorithm on 30-second excerpts from each of 500,000 songs, analyzing the music itself. This multi-layer network learns progressively more complex and invariant song features, and its basic idea is very similar to image classification. Indeed, in the layers near the top of the network, "the learned filters become more selective for certain subsets of music", such as gospel, Chinese pop, or deep house. In practice, this means such a system can make music recommendations based solely on the similarity of the songs themselves, a property that is very useful for assembling personalized playlists. Although it is unclear whether Spotify has incorporated these findings into its production algorithms, the experiment itself is significant.

Solving the cold start problem

Cold starts are the enemy of recommender systems, and they affect both users and items. For users, a cold start means the system has little or no information about their behavior and preferences. For items, it means a lack of item-to-item interaction data to guide recommendations (metadata may exist, but it is rarely enough for truly detailed recommendations).
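The convolution-plus-pooling idea behind this kind of audio analysis can be illustrated with a toy example. This is plain Python with made-up numbers, not Spotify's actual model: a 1-D filter slides along a feature sequence (say, energy in one frequency band over time), and global max pooling records how strongly that pattern appeared anywhere in the clip.

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation, as used in deep learning)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Zero out negative activations."""
    return [max(0.0, x) for x in xs]

def global_max_pool(xs):
    """Reduce a feature map to one number: did the pattern occur anywhere?"""
    return max(xs)

# Toy "audio feature" sequence, e.g. energy in one frequency band per frame.
clip = [0.1, 0.2, 1.0, 1.0, 0.1, 0.0, 0.2, 1.0, 1.0, 0.1]

# A filter that responds to two consecutive high-energy frames.
filter_ = [0.5, 0.5]

activation = global_max_pool(relu(conv1d(clip, filter_)))
print(activation)  # → 1.0, the filter's strongest response in the clip
```

Stacking many such filters in several layers, then comparing the resulting activation vectors between songs, is the essence of recommending music by how it sounds rather than by who listened to it.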
With the content-based approach described above, item cold starts are significantly mitigated, because the recommender system becomes less dependent on transaction and interaction data. Creating a meaningful, personalized experience for new users, however, is a thornier problem that cannot be solved simply by collecting more information. The situation is common on e-commerce sites and online stores with broad product portfolios, where customers visit sporadically and with entirely different goals each time: they may decide to buy a microwave on one visit and look for a mobile phone on the next. The data collected during the first session is then almost completely irrelevant to the second.

An interesting approach to the user cold start problem is session-based, or item-to-session, recommendation. In simple terms, the system no longer relies on the customer's overall interaction history; instead, it breaks the data into separate sessions and models the user's interests from the clickstream of the current session. Future recommendation systems may then no longer depend on carefully crafted customer profiles built up over months or years, but can offer reasonable, relevant recommendations after the user has browsed the site for only a short time. Although this area has not been thoroughly researched, it offers a huge opportunity to improve the personalized online experience. Researchers at Gravity R&D, working on the EU-funded CrowdRec project, co-authored a paper describing a recurrent neural network (RNN) approach to session-based recommendations.
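The paper in question uses an RNN; as a much simpler stand-in that still captures the session-based idea, the sketch below (plain Python, toy data) predicts the next item purely from within-session click transitions, ignoring any long-term user profile.

```python
from collections import Counter, defaultdict

# Toy clickstreams: each inner list is one anonymous session,
# not a long-term user profile.
sessions = [
    ["phone", "phone_case", "charger"],
    ["phone", "charger", "headphones"],
    ["microwave", "microwave_plate", "oven_glove"],
    ["phone", "phone_case", "headphones"],
]

# Count item-to-next-item transitions observed inside sessions.
transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def recommend_next(current_item, k=1):
    """Recommend the k most likely next items given the current click."""
    return [item for item, _ in transitions[current_item].most_common(k)]

print(recommend_next("phone"))  # → ['phone_case']
```

An RNN generalizes this by conditioning on the entire ordered clickstream of the session rather than just the last click, but the key design choice is the same: the unit of personalization is the session, so a first-time visitor gets useful recommendations within a few clicks.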
It was the first research paper to apply deep learning techniques to session-based recommendations, and the results showed the approach outperforming the state-of-the-art algorithms of the time.

Moments of truth

Moments of truth are the brief periods in which customers make decisions based on corporate communications and the information available to them. These decisions are heavily influenced by long-term considerations, personal preferences, and brand loyalty, but they are also shaped by momentary impressions. In these moments of truth, powerful deep learning systems may offer a viable way to influence the human decision-making process, an insight that is still quite novel. We all know, for example, that beautiful product images drive sales (entire industries exist to photograph rental homes or food attractively). Deep learning-based image analysis promises to go further, evaluating which visual features of a product image actually have a measurable positive impact on sales.

Admittedly, this article is not exhaustive. Personalization is one of the Internet industry's most urgent needs, and deep learning almost certainly has great potential in this field. Companies that want to maintain a competitive advantage would therefore do well to keep an eye on how this technology develops.

Original link: http://dataconomy.com/2017/06/deep-learning-personalizing-internet/