Browse Articles

Collaboration beats heterogeneity: Improving federated learning-based waste classification

Chong et al. | Jul 18, 2023


Building on the success of deep learning, recent work has attempted to develop waste classification models using deep neural networks. This work presents federated learning (FL) as a solution, since it allows participants to help train the model using their own data. Results showed that with fewer clients, a higher participation ratio led to less accuracy degradation from data heterogeneity.
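The interaction between client count and participation ratio can be illustrated with a minimal FedAvg-style sketch. The scalar model, synthetic client data, and all parameters below are illustrative assumptions, not the authors' implementation:

```python
import random

def local_update(w, data, lr=0.5, steps=5):
    # Toy "local training": gradient descent on squared error,
    # pulling the scalar model toward this client's data mean.
    for _ in range(steps):
        grad = sum(w - x for x in data) / len(data)
        w -= lr * grad
    return w

def fedavg(client_data, rounds=20, participation=1.0, seed=0):
    # Each round, a fraction of clients trains locally and the
    # server averages their models (FedAvg-style aggregation).
    rng = random.Random(seed)
    n = len(client_data)
    k = max(1, int(participation * n))
    w = 0.0  # global scalar model
    for _ in range(rounds):
        chosen = rng.sample(range(n), k)
        w = sum(local_update(w, client_data[c]) for c in chosen) / k
    return w
```

With full participation the global model settles near the mean of all clients' data; lowering `participation` makes each round see only a biased subset, which is where heterogeneity hurts.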


Reddit v. Wall Street: Why Redditors beat Wall Street at its own game

Bhakar et al. | Sep 13, 2022


Here the authors investigated the motivations behind the short squeeze of GameStop stock, in which users of the internet forum Reddit drove a sudden increase in GameStop's stock price in early 2021. They relied on both qualitative and quantitative analyses, tracking activity on the r/WallStreetBets subreddit in relation to mentions of GameStop. With these methods they found that while the short squeeze was initially driven by financial motivations, emotional motivations became more important later on. They suggest that social phenomena can be dynamic and evolving, necessitating mixed-method approaches to study them.


The most efficient position of magnets

Shin et al. | Mar 28, 2024

Image credit: immo RENOVATION

Here, the authors investigated the most efficient way to position magnets so that they hold the most pieces of paper against the surface of a refrigerator. Using a regression model together with an artificial neural network, they identified that the most efficient positions for four magnets are at the vertices of a rectangle.
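The general workflow the summary describes (fit a surrogate model, then query it over candidate placements) can be sketched generically. The quadratic `surrogate_score` and the candidate grid below are stand-ins, not the authors' model or data:

```python
import itertools

def surrogate_score(config):
    # Stand-in surrogate model: rewards spreading four magnets toward
    # the corners of the sheet (higher = predicted to hold more paper).
    return sum(abs(x - 0.5) + abs(y - 0.5) for x, y in config)

def best_placement(candidates):
    # Query the fitted model over candidate configurations, keep the best.
    return max(candidates, key=surrogate_score)

# Candidate configurations: all ways to place 4 magnets on a 3x3 grid
# over the unit square.
grid = [(x / 2, y / 2) for x in range(3) for y in range(3)]
candidates = list(itertools.combinations(grid, 4))
best = best_placement(candidates)
```

Under this toy surrogate the search happens to select the four corners of the square, i.e. the vertices of a rectangle, echoing the paper's stated result.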


Artificial Intelligence Networks Towards Learning Without Forgetting

Kreiman et al. | Oct 26, 2018


In their paper, Kreiman et al. examined what it takes for an artificial neural network to perform well on a new task without forgetting its previous knowledge. By comparing methods that prevent such forgetting, they found that longer training times, together with maintaining the connections most important to a previous task while training on a new one, helped the neural network maintain its performance on both tasks. The authors hope that this proof-of-principle research will someday contribute to artificial intelligence that better mimics natural human intelligence.
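The "maintain the most important connections" idea resembles elastic-weight-consolidation-style regularization: a quadratic penalty anchors important weights near their values after the previous task. A minimal sketch, where the scalar weight lists, importance values, and `lam` are illustrative assumptions rather than the paper's exact method:

```python
def ewc_penalty(weights, old_weights, importance, lam=1.0):
    # Quadratic penalty anchoring important weights near their
    # post-previous-task values (EWC-style regularization).
    return lam * sum(f * (w - w0) ** 2
                     for w, w0, f in zip(weights, old_weights, importance))

def train_step(weights, grads, old_weights, importance, lr=0.1, lam=1.0):
    # Gradient step on the new-task loss plus the consolidation penalty.
    new = []
    for w, g, w0, f in zip(weights, grads, old_weights, importance):
        penalty_grad = 2 * lam * f * (w - w0)
        new.append(w - lr * (g + penalty_grad))
    return new
```

Weights with high importance are pulled back toward their old values during new-task training, while unimportant weights are free to follow the new-task gradient.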

