Browse Articles

Machine learning-based enzyme engineering of PETase for improved efficiency in plastic degradation

Gupta et al. | Jan 31, 2023

Here, recognizing the growing threat of non-biodegradable plastic waste, the authors investigated whether a modified bacterial enzyme could decompose polyethylene terephthalate (PET). They used machine learning models and simulations to screen candidate variants and identify an optimized enzyme. Ultimately, they identified a potential mutant PETase capable of decomposing PET with improved thermal stability.
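As a rough illustration of the screening idea, the sketch below trains a regression model on featurized mutants and ranks unseen candidates by predicted melting temperature. The features, model choice, and data are hypothetical placeholders, not the authors' pipeline.

```python
# Illustrative sketch of ML-guided mutant screening (not the authors' method).
# A regression model trained on featurized mutants with measured stability is
# used to rank unseen candidates for downstream simulation or validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a featurized mutant (e.g., one-hot
# encoded substitutions); each label is a measured melting temperature in °C.
X_train = rng.integers(0, 2, size=(200, 40)).astype(float)
y_train = 45 + 10 * rng.random(200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score a pool of candidate mutants and keep the top predictions.
candidates = rng.integers(0, 2, size=(1000, 40)).astype(float)
predicted_tm = model.predict(candidates)
top = np.argsort(predicted_tm)[::-1][:10]
print("Top candidate indices:", top)
print("Predicted Tm (°C):", predicted_tm[top].round(1))
```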

Read More...

Can Green Tea Alleviate the Effects of Stress Related to Learning and Long-Term Memory in the Great Pond Snail (Lymnaea stagnalis)?

Elias et al. | Jan 30, 2021

Stress and anxiety have become more prevalent in recent years, with teenagers especially at risk. Recent studies show that experiencing stress while learning can impair brain-cell communication, negatively impacting learning. Green tea is believed to have the opposite effect, aiding learning and memory retention. In this study, the authors used the pond snail Lymnaea stagnalis to explore the relationship between green tea and a memory-impairing stressor, and to determine how each affects the snails' ability to learn and to form and retain memories. Using a conditioned taste aversion (CTA) assay, in which snails are exposed to a sweet substance followed by a bitter one and the number of biting responses is recorded, the authors found that stress impaired snail learning and short-term, intermediate, and long-term memory.

Read More...

Artificial Intelligence Networks Towards Learning Without Forgetting

Kreiman et al. | Oct 26, 2018

In their paper, Kreiman et al. examined what it takes for an artificial neural network to perform well on a new task without forgetting its previous knowledge. By comparing methods that prevent forgetting, they found that longer training times and preserving the connections most important to a previous task while training on a new one helped the network maintain its performance on both tasks. The authors hope that this proof-of-principle research will someday contribute to artificial intelligence that better mimics natural human intelligence.
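One way to "preserve the most important connections" is to penalize changes to weights that mattered for the previous task, in the spirit of elastic weight consolidation. The sketch below is a minimal illustration of that idea, not the authors' exact method; the per-parameter importance values here are random placeholders.

```python
# Illustrative sketch: anchor weights important for task A with a quadratic
# penalty while training on task B (EWC-style; not the authors' exact method).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Hypothetical snapshot after task A: old weights plus a per-parameter
# importance estimate (e.g., a diagonal Fisher approximation).
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.rand_like(p) for n, p in model.named_parameters()}

def consolidation_penalty(model, lam=100.0):
    """Quadratic penalty pulling important weights back toward task-A values."""
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + (importance[n] * (p - old_params[n]) ** 2).sum()
    return lam * loss

# Training on task B: task loss plus the consolidation penalty.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y) + consolidation_penalty(model)
    loss.backward()
    opt.step()
```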

Read More...

An explainable model for content moderation

Cao et al. | Aug 16, 2023

The authors examined the ability of machine learning algorithms to interpret language, given their increasing use in moderating content on social media. Using an explainable model, they achieved 81% accuracy in distinguishing fake from real news based on the language of posts alone.
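A common route to an explainable text classifier is a linear model over word features, since its per-word weights can be read off directly to explain each prediction. The toy sketch below illustrates that general approach with hypothetical data; it is not the authors' model.

```python
# Illustrative sketch of an explainable text classifier (not the authors'
# model): logistic regression over TF-IDF features, whose coefficients tie
# each vocabulary word to the fake/real decision.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy corpus; real work would use a labeled fake/real news set.
posts = ["shocking miracle cure doctors hate",
         "senate passes budget after long debate",
         "you won't believe this one weird trick",
         "central bank raises interest rates"]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real

vec = TfidfVectorizer()
X = vec.fit_transform(posts)
clf = LogisticRegression().fit(X, labels)

# Inspecting the learned weights is what makes the model explainable.
words = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])
print("Most 'real'-leaning words:", words[order[:5]])
print("Most 'fake'-leaning words:", words[order[-5:]])
```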

Read More...