
AI Neural Network - PLOS

Summarized by Plex Scholar
Last Updated: 12 December 2022


Rice nitrogen nutrition monitoring classification method based on the convolution neural network model: Direct detection of rice nitrogen nutritional status

In current practice, experts enter the field to judge leaf color and growth by eye and then apply what they estimate to be the correct amount of nitrogen fertilizer. To enable automatic monitoring of rice nitrogen nutrition, we designed a monitoring scheme for Jiangxi rice based on a convolutional neural network, using canopy photographs of rice from the same region across different growth periods. The results show that a single CNN model can distinguish rice nitrogen nutrition status across different seasons, enabling automatic differentiation of nitrogen nutrition status in this region and guiding the scientific application of nitrogen to rice in this area. The model builds on Inception, a CNN architecture that widens the network and extracts higher-level features without increasing the amount of computation required.
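
As a rough illustration of the Inception idea mentioned above, the sketch below builds a standard Inception-style block in PyTorch: parallel convolution branches with 1x1 bottlenecks, concatenated along the channel axis. The branch widths and input size are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an Inception-style block, assuming the standard
# GoogLeNet design (parallel branches concatenated along the channel axis);
# layer sizes here are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # Parallel branches extract features at several receptive-field sizes.
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),  # 1x1 bottleneck keeps FLOPs flat
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, 4, kernel_size=1),
            nn.Conv2d(4, 8, kernel_size=5, padding=2),
        )
        self.pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 8, kernel_size=1),
        )

    def forward(self, x):
        # Concatenate branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

x = torch.randn(1, 32, 64, 64)      # e.g. a canopy-image feature map
print(InceptionBlock(32)(x).shape)  # torch.Size([1, 56, 64, 64])
```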

Source link: https://doi.org/10.1371/journal.pone.0273360


Modelling bark thickness for Scots pine (Pinus sylvestris L.) and common oak (Quercus robur L.) with recurrent neural networks

We propose a new strategy that treats bark thickness along the stem as a time series. We used a recurrent neural network to model double bark thickness and compared it with stem taper curves adjusted to predict double bark thickness. The study material comprises 750 felled common oak and 144 Scots pine trees, representing dominant forest-forming tree species in Europe. Various network designs with one- and two-time-window delays were tested with three learning algorithms, including Levenberg-Marquardt and Scaled Conjugate Gradient. The results reveal that the recurrent ANN is a universal method that provides the most accurate estimate of bark thickness at a given stem height, with a particular advantage for trees of atypical height and for the upper and lower parts of the stem.
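
The sketch below shows one way a recurrent network can regress double bark thickness along the stem, treating stem sections as a sequence. The paper trained with Levenberg-Marquardt and Scaled Conjugate Gradient, which PyTorch does not provide, so Adam is substituted here; the feature set and synthetic data are assumptions for demonstration only.

```python
# Illustrative sketch of a recurrent model for bark thickness along the stem.
# The paper used Levenberg-Marquardt / Scaled Conjugate Gradient training;
# Adam is substituted because PyTorch has neither built in.
import torch
import torch.nn as nn

class BarkRNN(nn.Module):
    def __init__(self, n_features=3, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # double bark thickness per section

    def forward(self, x):                # x: (batch, stem_sections, n_features)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)  # (batch, stem_sections)

# Synthetic stand-in: per-section features such as relative height, section
# diameter, and DBH (a hypothetical feature choice, not the authors').
x = torch.rand(8, 20, 3)
y = torch.rand(8, 20)
model = BarkRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```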

Source link: https://doi.org/10.1371/journal.pone.0276798


Audit lead selection and yield prediction from historical tax data using artificial neural networks

Tax audits are a critical procedure used by all tax departments to ensure tax compliance and fairness. Here, we developed audit lead selection software based on artificial neural networks, trained and evaluated on an integrated database of 93,413 unique tax records from 8,647 restaurant establishments over ten years in Northern California, provided by the California Department of Tax and Fee Administration. This work demonstrates how data can be used to develop evidence-based auditing and to validate empirical hypotheses, resulting in higher audit yields and fairer audit selection procedures.
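
A minimal sketch of the kind of model the summary describes: a feed-forward network mapping tax-record features to predicted audit yield, with leads ranked by prediction. The features, data, and hyperparameters below are synthetic stand-ins, not the CDTFA data or the authors' architecture.

```python
# Hedged sketch: an MLP mapping tax-record features to audit yield.
# Everything here (features, sizes, split) is an illustrative assumption.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))  # e.g. reported sales, deductions, ratios
y = rng.normal(size=1000)        # audit yield (synthetic stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),            # scale features before the network
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
# Rank establishments by predicted yield to produce audit leads.
leads = np.argsort(model.predict(X_te))[::-1]
```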

Source link: https://doi.org/10.1371/journal.pone.0278121


Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation

Artificial neural networks can achieve extraordinary results in many domains, yet training on a new task tends to erase what was learned before. Sleep, in turn, has been shown to play a key role in memory and learning by allowing prompt recall of previously learned memory patterns. To investigate the causes of catastrophic forgetting and the role of sleep in preventing it, we used a spiking neural network model that simulates sensory processing and reinforcement learning in the animal brain. In synaptic weight space, training on a new task moved the weights away from the manifold representing old tasks, leading to forgetting. Here we show that interleaving new-task training with sleep-like phases protects old memories by improving the network's memory representation in synaptic weight space. Sleep makes this possible by replaying old memory traces without explicit use of the old task data.
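
The paper's mechanism can be loosely illustrated with a dense network (the original uses spiking neurons with local plasticity, which this sketch does not reproduce). Here "sleep" is approximated as pseudo-rehearsal: the network's responses to noise-driven inputs are stored after old-task training and replayed between new-task updates, with no access to the old task's data or labels.

```python
# Simplified analogy only: the paper uses spiking neurons with local
# plasticity; this dense-network pseudo-rehearsal loop merely mimics the
# wake/sleep interleaving described in the summary.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# After "task A" training, snapshot responses to random replay inputs:
# stored memory traces that stand in for sleep replay of old activity.
replay_x = torch.randn(256, 10)
with torch.no_grad():
    replay_y = net(replay_x)

task_b_x, task_b_y = torch.randn(256, 10), torch.randn(256, 2)
for epoch in range(50):
    # Wake: train on the new task.
    opt.zero_grad()
    loss_fn(net(task_b_x), task_b_y).backward()
    opt.step()
    # Sleep: replay old traces without task-A data or labels.
    opt.zero_grad()
    loss_fn(net(replay_x), replay_y).backward()
    opt.step()
```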

Source link: https://doi.org/10.1371/journal.pcbi.1010628


The geometry of representational drift in natural and artificial neural networks

Neurons in sensory areas encode and represent stimuli. Using in vivo two-photon calcium imaging, we investigate stimulus representations from fluorescence recordings of hundreds of neurons in mouse primary visual cortex, in two open datasets from the Allen Institute, and we corroborate previous reports that such representations change as experimental trials are repeated across days, even while behavioral results remain seemingly stable. We geometrically characterize this representational drift, which we observe both for passively presented and for behaviorally relevant stimuli, and suggest a possible explanation for it. The characteristics we see in the neural data are similar to those of artificial neural networks whose representations are updated by continual learning in the presence of dropout, i.e., a random masking of units: injecting several distinct types of noise into an artificial network induces comparable representational shifts while the network keeps adjusting its weights to maintain stable outputs. We therefore conclude that an important factor behind representational drift in biological networks is the presence of dropout-like noise during ongoing learning, and that such a process may be computationally advantageous for the brain in the same way dropout is for artificial neural networks, where it is well known to aid learning and generalization.
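
The drift-under-dropout picture can be sketched as follows: train a small network repeatedly on the same task with dropout active and track how the hidden representation of a fixed probe stimulus moves between "sessions". The architecture, probe, and cosine-similarity metric are illustrative choices, not the authors' analysis pipeline.

```python
# Sketch: representational drift under dropout-like noise during continual
# training on a fixed task. All choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5),
                    nn.Linear(64, 5))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
x, y = torch.randn(128, 20), torch.randint(0, 5, (128,))
probe = torch.randn(1, 20)          # fixed stimulus presented each "day"

def hidden(inp):
    net.eval()                      # dropout off when reading out
    with torch.no_grad():
        return net[1](net[0](inp))  # post-ReLU hidden representation

prev = hidden(probe)
for session in range(5):            # repeated "experimental sessions"
    net.train()                     # dropout on: the noise source
    for _ in range(200):            # task performance stays stable...
        opt.zero_grad()
        F.cross_entropy(net(x), y).backward()
        opt.step()
    cur = hidden(probe)
    # ...while the probe's representation drifts between sessions.
    print(f"session {session}: cosine vs previous = "
          f"{F.cosine_similarity(cur, prev).item():.3f}")
    prev = cur
```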

Source link: https://doi.org/10.1371/journal.pcbi.1010716

* Please keep in mind that all text is summarized by machine; we do not bear any responsibility, and you should always check the original source before taking any action
