Reinforcement learning (RL) is a form of machine learning that can be harnessed to develop artificial intelligence by exposing it to multiple generations of data. The study demonstrates how replay buffer reward mechanics can inform the creation of new pruning methods that improve RL efficiency.
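As a rough illustration of the general idea, and not the study's actual method, the sketch below keeps a replay buffer within a fixed capacity by pruning the transitions with the smallest absolute rewards; the capacity, pruning rule, and prune fraction are all assumed for the example.

```python
import random
from collections import namedtuple

# One stored interaction: (state, action, reward, next_state, done)
Transition = namedtuple("Transition", "state action reward next_state done")

class PrunedReplayBuffer:
    """Replay buffer that periodically drops low-reward experience.

    The pruning rule (keep transitions with the largest absolute reward)
    is an illustrative assumption, not the rule from the article.
    """
    def __init__(self, capacity=10_000, prune_fraction=0.2):
        self.capacity = capacity
        self.prune_fraction = prune_fraction
        self.buffer = []

    def push(self, *args):
        self.buffer.append(Transition(*args))
        if len(self.buffer) > self.capacity:
            self.prune()

    def prune(self):
        # Sort by |reward| and discard the least informative fraction.
        self.buffer.sort(key=lambda t: abs(t.reward), reverse=True)
        keep = int(len(self.buffer) * (1 - self.prune_fraction))
        self.buffer = self.buffer[:keep]

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Toy usage with random transitions
buf = PrunedReplayBuffer(capacity=100)
for step in range(500):
    buf.push(step, 0, random.gauss(0, 1), step + 1, False)
batch = buf.sample(32)
print(len(buf.buffer), len(batch))
```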
Using two-stage deep learning to assist the visually impaired with currency differentiation
Here, recognizing the difficulty that visually impaired people may have differentiating United States currency, the authors sought to use artificial intelligence (AI) models to identify US currency. With a one-stage AI, they reported a test accuracy of 89%, finding that multi-stage deep learning models did not provide a significant advantage over a single-stage model.
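For readers unfamiliar with the one-stage versus two-stage distinction, the sketch below compares the two setups on synthetic data with off-the-shelf scikit-learn classifiers; the features, coarse grouping, and models are placeholders rather than the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic "image features": 4 denomination classes, 64-dim features.
X = rng.normal(size=(800, 64)) + np.repeat(np.eye(4, 64) * 3, 200, axis=0)
y = np.repeat(np.arange(4), 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One-stage: a single classifier over all denominations.
one_stage = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("one-stage accuracy:", accuracy_score(y_te, one_stage.predict(X_te)))

# Two-stage: first predict a coarse group (classes 0/1 vs 2/3), then use a
# denomination classifier trained within each group.
group_tr = (y_tr >= 2).astype(int)
stage1 = LogisticRegression(max_iter=1000).fit(X_tr, group_tr)
stage2 = {g: LogisticRegression(max_iter=1000).fit(X_tr[group_tr == g],
                                                   y_tr[group_tr == g])
          for g in (0, 1)}
pred_group = stage1.predict(X_te)
pred = np.array([stage2[g].predict(x.reshape(1, -1))[0]
                 for g, x in zip(pred_group, X_te)])
print("two-stage accuracy:", accuracy_score(y_te, pred))
```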
Artificial Intelligence Networks Towards Learning Without Forgetting
In their paper, Kreiman et al. examined what it takes for an artificial neural network to perform well on a new task without forgetting its previous knowledge. By comparing methods designed to prevent forgetting, they found that longer training times, and preserving the connections most important to a previously learned task while training on a new one, helped the neural network maintain its performance on both tasks. The authors hope that this proof-of-principle research will someday contribute to artificial intelligence that better mimics natural human intelligence.
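One common way to "maintain the most important connections" is to add a quadratic penalty that anchors weights estimated to matter for the old task while the network trains on the new one. The sketch below shows that idea on a toy objective; the importance values, penalty strength, and task losses are all assumed and are not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights learned on task A and a per-weight importance estimate
# (e.g., a Fisher-information diagonal); both are toy values here.
w_old = rng.normal(size=5)
importance = np.array([5.0, 0.1, 4.0, 0.1, 0.1])  # assumed importances

# Toy "task B" objective: pull every weight toward new targets w_B.
w_B = w_old + np.ones(5)
lam = 1.0  # strength of the anchoring penalty

def grad(w):
    task_b_grad = 2 * (w - w_B)                        # gradient of ||w - w_B||^2
    penalty_grad = 2 * lam * importance * (w - w_old)  # anchor important weights
    return task_b_grad + penalty_grad

w = w_old.copy()
for _ in range(500):
    w -= 0.01 * grad(w)

# Important weights (indices 0 and 2) move much less than unimportant ones.
print(np.round(w - w_old, 2))
```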
Machine learning for retinopathy prediction: Unveiling the importance of age and HbA1c with XGBoost
The purpose of our study was to examine the correlation of glycosylated hemoglobin (HbA1c), blood pressure (BP) readings, and lipid levels with retinopathy. Our main hypothesis was that poor glycemic control, as evidenced by high HbA1c levels, high blood pressure, and abnormal lipid levels, increases the risk of retinopathy. The two features most important to our XGBoost model were age and HbA1c, indicating that older patients with poor glycemic control are more likely to show retinopathy.
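The sketch below shows how feature importances of this kind can be read off an XGBoost classifier; the patient features, effect sizes, and data are synthetic stand-ins rather than the study's dataset.

```python
import numpy as np
import xgboost as xgb  # pip install xgboost

rng = np.random.default_rng(0)
n = 1000
# Synthetic patient records; feature names and effect sizes are illustrative.
age   = rng.uniform(20, 80, n)
hba1c = rng.uniform(5, 12, n)
sbp   = rng.uniform(100, 180, n)
ldl   = rng.uniform(70, 200, n)
X = np.column_stack([age, hba1c, sbp, ldl])
# Retinopathy risk driven mostly by age and HbA1c in this toy example.
risk = 0.04 * (age - 50) + 0.8 * (hba1c - 7) + 0.01 * (sbp - 120) + rng.normal(0, 1, n)
y = (risk > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

for name, score in zip(["age", "HbA1c", "systolic BP", "LDL"], model.feature_importances_):
    print(f"{name:12s} importance = {score:.3f}")
```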
Using Artificial Intelligence to Forecast Continuous Glucose Monitor (CGM) readings for Type One Diabetes
People with Type One diabetes often rely on continuous glucose monitors (CGMs) to track their blood glucose and manage their condition. Researchers are now working to help people with Type One diabetes more easily monitor their health by developing models that predict future blood glucose levels from CGM readings. Jalla and Ghanta tackle this issue by exploring the use of AI models to forecast blood glucose levels with CGM data.
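A minimal sketch of this forecasting setup, assuming a sliding window of past readings and a simple linear model rather than whatever architecture Jalla and Ghanta used, looks like this:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic CGM trace: one reading every 5 minutes for a week.
t = np.arange(2016)
glucose = 120 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

history, horizon = 12, 6   # use the last hour to predict 30 minutes ahead
X, y = [], []
for i in range(history, glucose.size - horizon):
    X.append(glucose[i - history:i])
    y.append(glucose[i + horizon])
X, y = np.array(X), np.array(y)

# Train on the first 80% of the week, evaluate on the rest.
split = int(0.8 * len(X))
model = Ridge().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE (mg/dL):", round(mean_absolute_error(y[split:], pred), 1))
```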
Quantitative analysis and development of alopecia areata classification frameworks
This article discusses alopecia areata, an autoimmune disorder causing sudden hair loss when the immune system mistakenly attacks hair follicles. The article introduces the use of deep learning (DL) techniques, particularly convolutional neural networks (CNNs), for classifying images of healthy and alopecia-affected hair. The study presents a comparative analysis of newly optimized CNN models against existing ones, trained on datasets containing images of healthy and alopecia-affected hair. The Inception-ResNet-v2 model emerged as the most effective for classifying alopecia areata.
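For illustration, the sketch below builds a binary classifier on top of a pretrained Inception-ResNet-v2 backbone with Keras; the dataset directory, image size, and training schedule are placeholders, not the study's configuration.

```python
import tensorflow as tf

# Build an Inception-ResNet-v2 backbone with a binary head
# (healthy vs. alopecia-affected). Paths and hyperparameters below
# are placeholders, not the study's settings.
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # start by training only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory of images sorted into healthy/ and alopecia/ folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "scalp_images", image_size=(299, 299), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```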
Entropy-based subset selection principal component analysis for diabetes risk factor identification
In this article, the authors looked at developing a strategy that would allow for earlier diagnosis of diabetes, since earlier diagnosis improves long-term outcomes. They found that BMI, triceps skinfold thickness, and blood pressure were the risk factors with the highest accuracy in predicting diabetes risk.
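The sketch below illustrates one plausible reading of an entropy-based subset selection step ahead of PCA: rank features by Shannon entropy, keep the highest-entropy subset, then project it. The data, entropy estimator, and cutoff are assumptions for the example, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in for a diabetes dataset; column names follow
# Pima-style risk factors, values are random.
names = ["pregnancies", "glucose", "blood_pressure", "skinfold",
         "insulin", "bmi", "pedigree", "age"]
X = rng.normal(size=(500, len(names))) * rng.uniform(0.5, 3.0, len(names))

def shannon_entropy(col, bins=20):
    counts, _ = np.histogram(col, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Rank features by entropy and keep the top half before running PCA.
entropies = np.array([shannon_entropy(X[:, j]) for j in range(X.shape[1])])
top = np.argsort(entropies)[::-1][:4]
print("selected features:", [names[j] for j in top])

pca = PCA(n_components=2).fit(X[:, top])
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
```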
Artificial intelligence assisted violin performance learning
In this study, the authors examined the ability of artificial intelligence to detect the tempo, rhythm, and intonation of a piece played on the violin. Technology like this would allow students to practice and receive feedback without needing a teacher present.
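As an illustration of how such feedback could be extracted from a recording, the sketch below uses the librosa library to estimate tempo and track pitch; the audio file path and note range are placeholders, and this is not the authors' system.

```python
import numpy as np
import librosa  # pip install librosa

# "student_recording.wav" is a placeholder path for a practice recording.
y, sr = librosa.load("student_recording.wav")

# Tempo/rhythm: estimate beats per minute and beat positions.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print(f"estimated tempo: {float(tempo):.1f} BPM, {len(beat_times)} beats found")

# Intonation: track the fundamental frequency over the violin's range
# (G3 up to roughly E7) and convert it to note names.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("G3"),
                             fmax=librosa.note_to_hz("E7"), sr=sr)
notes = [librosa.hz_to_note(f) for f in f0[voiced & ~np.isnan(f0)]]
print("first detected notes:", notes[:10])
```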
An improved video fingerprinting attack on users of the Tor network
The Tor network allows individuals to protect their online identities by encrypting and relaying their traffic; however, it is vulnerable to fingerprinting attacks that threaten users' online privacy. In this paper, the authors develop a new video fingerprinting model to explore how well video streaming can be fingerprinted in Tor. They found that their model could distinguish which of 50 videos a user was hypothetically watching on the Tor network with 85% accuracy, demonstrating that video fingerprinting is a serious threat to the privacy of Tor users.
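The closed-world setup they evaluate (pick the right video out of a fixed set of 50) can be illustrated with a generic classifier on synthetic traffic traces, as in the sketch below; the paper's own model and real Tor traces would differ substantially.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_videos, traces_per_video, trace_len = 50, 40, 300

# Synthetic traffic traces: each video gets a characteristic pattern of
# (encrypted) packet-size bursts; real Tor traces are far noisier.
signatures = rng.normal(size=(n_videos, trace_len))
X = np.vstack([sig + rng.normal(0, 0.8, (traces_per_video, trace_len))
               for sig in signatures])
y = np.repeat(np.arange(n_videos), traces_per_video)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("closed-world accuracy over 50 videos:",
      round(accuracy_score(y_te, clf.predict(X_te)), 2))
```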
Exploring the Wonders of the Early Universe: Green Pea Galaxies and Light Flux
Studying other galaxies can help us understand the origins of the universe. Here, the authors study a type of galaxy known as Green Peas, gaining insights that could help inform our understanding of Lyman-alpha emitters, one of the first types of galaxies to exist in the early universe.