Content at a glance: The United Nations projects that the world population will reach 9.1 billion in 2050 and that global demand for food will rise by 70%. Because agricultural development is uneven across the world, food production in many regions cannot be measured accurately, which makes it impossible to plan agricultural development there sensibly. Existing methods for estimating food production are either hard to scale up or demand a high level of technology. To address this, researchers at Kyoto University used a convolutional neural network (CNN) to analyze photos of farmland and estimate local grain yield efficiently and accurately, offering a new tool for promoting global agricultural development.

Author | Xuecai
Editor | Sanyang

This article was first published on the HyperAI WeChat public account.

Global demand for food is expected to grow by 70% by 2050, driven by population growth, rising incomes, and the widespread use of biofuels. At the same time, global warming and declining biodiversity leave food production highly vulnerable to environmental change, and agricultural development remains uneven across regions.

Figure 1: Global cereal production map, 2020

As the map shows, China, the United States, India, and Brazil are the main grain-producing regions, while grain output in the Southern Hemisphere is comparatively low. Because agricultural productivity in the Southern Hemisphere is low, its grain output is also difficult to measure accurately, so it is hard to evaluate local agricultural productivity, let alone offer effective ways to raise production.

Three methods are commonly used to estimate grain production: self-reporting, harvest sampling and measurement, and remote sensing. The first two are difficult to apply at scale, while remote sensing is constrained by the local level of technology. To address this, researchers at Kyoto University used a convolutional neural network (CNN) to analyze photos of farmland taken in the field and estimate local grain yield. The results show that the CNN model can quickly and accurately estimate rice yield at harvest and during late ripening under different lighting conditions. The work has been published in Plant Phenomics.

Paper link: https://spj.science.org/doi/10.34133/plantphenomics.0073

**Experimental procedure**

**Building the database: rice canopy photos + grain yield**

The researchers collected rice photos and grain yields from 20 farms in seven countries. When the rice was ripe, they used a digital camera pointed vertically downward from 0.8 to 0.9 m above the rice canopy to capture RGB photos covering 1 m² of rice.

Note: The rice canopy is the top layer of dense branches and leaves, the main part of the plant that performs photosynthesis.

They then varied the shooting angle, time of day, and growth stage, and in some experiments removed the rice inflorescences one by one to probe how the CNN model predicts yield. In total they obtained 22,067 RGB photos of 462 rice varieties from 4,820 shooting locations. The grain yield used in the experiments is the rough (unhulled) grain yield, that is, the total weight of filled and empty grains. The measured yields ranged from 0.1 t/ha (tons per hectare) to 16.1 t/ha, followed a roughly normal distribution, and averaged about 5.8 t/ha.
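The paper's data pipeline is not reproduced in this article, but as a rough sketch, in PyTorch (the framework the study used), of how such photo-yield pairs might be organized for training; the file layout, CSV columns, and input size below are illustrative assumptions, not the authors' code:

```python
# Illustrative sketch only: pairing 1 m² canopy photos with rough grain yield labels (t/ha).
# File layout, CSV columns, and the 512x512 input size are assumptions, not the authors' pipeline.
import pandas as pd
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class CanopyYieldDataset(Dataset):
    def __init__(self, csv_path, image_dir):
        # Assumed CSV with columns "photo" (file name) and "yield_t_ha" (measured rough grain yield).
        self.records = pd.read_csv(csv_path)
        self.image_dir = image_dir
        self.transform = transforms.Compose([
            transforms.Resize((512, 512)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        row = self.records.iloc[idx]
        image = Image.open(f"{self.image_dir}/{row['photo']}").convert("RGB")
        yield_t_ha = torch.tensor([row["yield_t_ha"]], dtype=torch.float32)
        return self.transform(image), yield_t_ha


# Example loader; a batch size of 32 matches the setting the study reports as optimal.
loader = DataLoader(CanopyYieldDataset("yields.csv", "photos"), batch_size=32, shuffle=True)
```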
Figure 2: Rice canopy images and grain yield distribution. A: Distribution of rough grain yield in the seven countries; B: Pie chart of average rough grain yield by country; C: Image of the rice with the highest rough grain yield; D: Image of the rice with the lowest rough grain yield.

**Yield prediction: canopy photos + CNN → grain yield**

The CNN model, loss function, and optimizer were implemented in Python with the PyTorch framework. The researchers compared the validation loss and relative root mean square error (rRMSE) at the end of training across different combinations of batch size and learning rate, and identified the best-performing batch size (32) and learning rate (0.0001).

The CNN model has five convolutional layers in its main stream (MS) and four convolutional layers in its branching stream (BS). Its pooling layers include average pooling and max pooling. The activation function is mainly the rectified linear unit (ReLU), with the exponential linear unit (ELU) used in some parts. Finally, the MS and BS are merged and the estimated grain yield is output through a ReLU layer.

Figure 3: Diagram of the CNN model

The CNN model is robust to image resolution. When the ground sampling distance (GSD, the real-world distance covered by each pixel, which is inversely related to resolution) is 0.2 cm/pixel, the coefficient of determination R² between the model's predictions and the measured yields is above 0.65. Even when the GSD increases to 3.2 cm/pixel, R² remains above 0.55.

Figure 4: Relationship between CNN model predictions and GSD. A: R² of the CNN model versus the GSD of the validation-set and test-set photos; B: Scatter plot of predicted versus measured yield; C & D: Example photos at GSDs of 0.2 cm/pixel and 3.2 cm/pixel.

The researchers then tested the CNN model on the prediction set. The model distinguished the yield difference between the Takanari and Koshihikari varieties grown in Tokyo, and its predictions were close to the measured values.

Figure 5: Measured yield (A) and predicted yield (B) of Takanari and Koshihikari rice

Next, the team used image occlusion to explore how the CNN model reads the photos and predicts grain yield. They covered specific areas of a photo with gray blocks and computed the difference between the yields the CNN model predicted before and after occlusion; a minimal code sketch of this procedure is given after the Figure 7 caption below.

Figure 6: Schematic of the occlusion experiment. A: Photo before occlusion; B: Photo after occlusion; C: Contribution of different areas of the photo to the predicted yield.

The results show that the predicted grain yield is positively correlated with the number of rice inflorescences in the photo and negatively correlated with the share of the image occupied by stems, leaves, and bare ground. The researchers therefore verified the role of inflorescences in yield prediction with an inflorescence removal experiment: they removed two inflorescences from the rice at a time, taking a photo and measuring the rough grain yield after each removal, until all inflorescences had been removed.

Figure 7: Inflorescence removal experiment and results. A: Schematic of the inflorescence removal experiment; B: Photo after inflorescence removal; C: Line chart of predicted and measured yield; D: Relationship between predicted and measured yield during inflorescence removal.
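Returning to the occlusion analysis in Figure 6, the procedure of covering image patches with gray blocks and recording how the predicted yield changes can be sketched roughly as follows. The patch size, gray value, and the assumption that `model` maps a batch of canopy images to yields are illustrative, not the authors' implementation:

```python
# Illustrative occlusion-sensitivity sketch: slide a gray block over the canopy photo and
# record how much the predicted yield drops when each region is hidden.
import torch


def occlusion_map(model, image, patch=64, gray=0.5):
    """image: (3, H, W) tensor in [0, 1]; returns the baseline yield and a coarse sensitivity map."""
    model.eval()
    _, h, w = image.shape
    heatmap = torch.zeros(h // patch, w // patch)
    with torch.no_grad():
        baseline = model(image.unsqueeze(0)).item()  # predicted yield for the unmodified photo
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                occluded = image.clone()
                occluded[:, i:i + patch, j:j + patch] = gray  # gray block over one region
                pred = model(occluded.unsqueeze(0)).item()
                # Positive values mark regions that pushed the prediction up (e.g. inflorescences).
                heatmap[i // patch, j // patch] = baseline - pred
    return baseline, heatmap
```

Regions whose occlusion lowers the prediction the most correspond to the bright areas in Figure 6C, which the study attributes mainly to rice inflorescences.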
As the number of inflorescences decreased, the yield predicted by the CNN model fell steadily, eventually dropping to 1.6 t/ha. This experiment shows that the CNN model judges grain yield mainly from the number of inflorescences visible in the photo.

**Robustness: shooting angle, time of day, and growth stage**

After verifying the CNN model's ability to predict grain yield, the researchers varied the shooting angle, time of day, and growth stage to test the model's robustness under different conditions.

The shooting angle was varied between 20° and 90° in 10° steps. **The results show that the prediction accuracy of the CNN model improves as the shooting angle increases.** At a shooting angle of 20°, the model's prediction error ranged from -3.7 to 2.4 t/ha; at 60°, the error ranged from -0.45 to 2.44 t/ha, close to the result at 90° (shooting vertically downward).

Figure 8: Shooting angle test and results. A: Schematic of the shooting angle experiment; B: Photos taken at different angles; C: Difference between predicted and measured yield for photos taken at different angles.

Next, a camera was fixed in place and photographed the field every 30 minutes to examine the effect of shooting time on the CNN model. Although the lighting conditions changed over the day, the model's predictions for the photos remained essentially stable.

Figure 9: Shooting time test and results. A: Schematic of the shooting time experiment; B: Photos taken at different times; C: Yields predicted by the CNN model for photos taken at different times.

Finally, the researchers examined how the shooting period affects the model's predictions. Starting after 50% of the rice had headed, they photographed the field every week and analyzed the photos with the CNN model. Early in ripening, the predicted yield was lower than the actual yield at harvest because the inflorescences were not yet fully mature. Over time, the predictions approached the actual yield, and from four weeks after 50% heading onward they were essentially stable and close to the measured value.

Figure 10: Shooting period test and results. A: Photos taken at different stages; DAH stands for days after heading, DBH stands for days before harvest; B: Predictions of the CNN model for photos taken at different stages.

**Smart agriculture: AI helps agricultural planning**

According to the United Nations, the world population will reach about 9.1 billion in 2050. As the global population grows and incomes rise, the demand for food keeps increasing. At the same time, the intensification, digitization, and automation of agricultural production have steadily raised yields per unit of land: from 2000 to 2019, global agricultural land area decreased by 3%, while the output of major crops increased by 52% and the output of fruits and vegetables rose by about 20%.

Specialized equipment such as large harvesters and drones lets farmers manage their fields accurately and conveniently, while technologies such as big data and the Internet of Things help farmers monitor field conditions in real time and automatically adjust the environment inside greenhouses.
Deep learning and large models can forecast the weather in advance, helping guard against extreme weather and easing traditional agriculture's dependence on favorable conditions.

Figure 11: Schematic diagram of a smart agriculture system

Yet as of 2021, the number of people affected by hunger worldwide had risen by about 46 million from the previous year, to 828 million. Imbalances in agricultural production and gaps in the supporting systems persist and have even become more pronounced. With the help of AI, we can plan local agricultural development more effectively, promote balanced development of world agriculture, and move closer to a satisfactory answer to global hunger.

Reference links:
[1] https://www.fao.org/documents/card/en/c/cc2211en
[2] https://www.deccanherald.com/opinion/smart-farming-tech-new-age-700994.html

This article was first published on the HyperAI WeChat public account.