Browse Articles

Transfer Learning for Small and Different Datasets: Fine-Tuning A Pre-Trained Model Affects Performance

Gupta et al. | Oct 18, 2020

In this study, the authors seek to improve a machine learning algorithm used for image classification: labeling images as male or female. In addition to fine-tuning the classification model, they investigate how their changes affect accuracy (an important task when developing and updating algorithms). To measure accuracy, one set of images is used to train the model and a separate set is used for validation. They found that the validation accuracy was close to the training accuracy. This study contributes to the expanding field of machine learning and its applications to image identification.
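
As a hedged illustration of the workflow described above (not the authors' actual code), the sketch below freezes an ImageNet-pretrained backbone, trains a new male/female classification head, and compares training versus validation accuracy; the directory layout, image size, and choice of MobileNetV2 are assumptions made for the example.

```python
# Illustrative fine-tuning sketch: freeze a pre-trained backbone, train a new
# binary classification head, and compare training vs. validation accuracy.
import tensorflow as tf

IMG_SIZE = (160, 160)  # assumed input size

# Assumed directory layout: data/train/<class>/*.jpg and data/val/<class>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep pre-trained features fixed; unfreeze later to fine-tune

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary male/female head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=5)
# Comparing these two numbers is how training vs. validation accuracy is assessed.
print(history.history["accuracy"][-1], history.history["val_accuracy"][-1])
```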

Propagation of representation bias in machine learning

Dass-Vattam et al. | Jun 10, 2021

Using facial recognition as a use-case scenario, we attempt to identify sources of bias in a model developed using transfer learning. To achieve this, we built a model on top of a pre-trained facial recognition model and scrutinized the accuracy of its image classification against factors such as age, gender, and race to observe whether the model performed better on some demographic groups than on others. By identifying the bias and finding its potential sources, this work contributes a unique technical perspective, from the view of a small-scale developer, to emerging discussions of accountability and transparency in AI.
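
One common way to carry out the kind of disaggregated audit described above is to compute accuracy separately for each demographic group and compare the results; the sketch below is a generic illustration with made-up data and column names, not the authors' pipeline.

```python
# Illustrative per-group accuracy audit; the column names and data are assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

# Assumed evaluation table: one row per test image, with the model's prediction,
# the ground-truth label, and a demographic attribute for that image.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],  # e.g. an age/gender/race bucket
})

# Accuracy disaggregated by group; large gaps between groups are the signal of bias.
per_group = results.groupby("group").apply(
    lambda g: accuracy_score(g["y_true"], g["y_pred"]))
print(per_group)
print("accuracy gap:", per_group.max() - per_group.min())
```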

Artificial Intelligence Networks Towards Learning Without Forgetting

Kreiman et al. | Oct 26, 2018

In their paper, Kreiman et al. examined what it takes for an artificial neural network to perform well on a new task without forgetting its previous knowledge. By comparing methods designed to prevent task forgetting, they found that longer training times and preserving the connections most important for one task while training on a new one helped the neural network maintain its performance on both tasks. The authors hope that this proof-of-principle research will someday contribute to artificial intelligence that better mimics natural human intelligence.
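
Preserving the most important connections is often implemented as a quadratic penalty that anchors parameters judged important for the old task while the network trains on a new one (as in Elastic Weight Consolidation); the sketch below illustrates that general idea only and is not the authors' specific method.

```python
# Sketch of an EWC-style penalty: keep weights that mattered for the old task
# close to their old values while training on a new task. Generic illustration.
import torch

def ewc_penalty(model, old_params, importance, lam=1000.0):
    """Quadratic penalty pulling each parameter toward its old-task value,
    weighted by a per-parameter importance estimate (e.g. a diagonal Fisher).
    `old_params` and `importance` are assumed dicts keyed by parameter name."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

# During new-task training (new_task_loss is the ordinary loss on the new data):
#   total_loss = new_task_loss + ewc_penalty(model, old_params, importance)
#   total_loss.backward(); optimizer.step()
```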

Assessing and Improving Machine Learning Model Predictions of Polymer Glass Transition Temperatures

Ramprasad et al. | Mar 18, 2020

In this study, the authors test whether training the machine-learning platform Polymer Genome on a larger dataset of glass transition temperatures (Tg) would improve its accuracy. Polymer Genome is a data-driven, machine learning informatics platform for polymer property prediction, and Tg is one of the properties needed to design new polymers in silico. They found that training the model with their larger, curated dataset improved the algorithm's Tg predictions, providing valuable improvements to this useful platform.
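
The general workflow the abstract describes, fitting a regression model to Tg data and checking how prediction error changes with the amount of training data, can be sketched as below; the synthetic features, Tg values, and the choice of a Gaussian process regressor are placeholders, not Polymer Genome's actual fingerprints or models.

```python
# Illustrative sketch: does a larger Tg training set reduce prediction error?
# The features and Tg values here are synthetic placeholders, not real polymer data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                                   # stand-in polymer fingerprints
tg = X @ rng.normal(size=10) + rng.normal(scale=5.0, size=500)   # stand-in Tg values

X_train, X_test, y_train, y_test = train_test_split(X, tg, test_size=0.2, random_state=0)

# Train on increasing amounts of data and track error on a held-out test set.
for n in (50, 150, 400):
    model = GaussianProcessRegressor().fit(X_train[:n], y_train[:n])
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{n:4d} training points -> test MAE {mae:.2f}")
```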

A comparative analysis of machine learning approaches for prediction of breast cancer

Nag et al. | May 11, 2021

Machine learning and deep learning techniques can be used to predict the early onset of breast cancer. The main objective of this analysis was to determine whether machine learning algorithms can predict the onset of breast cancer with more than 90% accuracy. Four supervised machine learning algorithms, Gaussian Naïve Bayes, K-Nearest Neighbors, Random Forest, and Logistic Regression, were considered because they span a variety of classification methods and offer high accuracy and performance. We hypothesized that all of these algorithms would provide accurate results, and that Random Forest and Logistic Regression would provide better accuracy and performance than Naïve Bayes and K-Nearest Neighbors.
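
A minimal sketch of the comparison described above, using scikit-learn's built-in Wisconsin breast cancer dataset as a stand-in for the study's data and cross-validated accuracy as the metric; the preprocessing and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Compare the four classifier families named in the abstract on a breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Gaussian Naive Bayes": GaussianNB(),
    "K-Nearest Neighbors":  make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "Random Forest":        RandomForestClassifier(random_state=0),
    "Logistic Regression":  make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

# 5-fold cross-validated accuracy for each model; >0.90 is the threshold of interest.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:22s} mean accuracy = {scores.mean():.3f}")
```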

The Effects of Confinement on the Associative Learning of Gallus gallus domesticus

Jaworsky et al. | Dec 23, 2019

This study aimed to determine whether confinement affects associative learning in chickens. The researchers found a significant difference in the time that elapsed before the chickens began to consume cottage cheese, measured before and after confinement. These results suggest that confinement distresses chickens, as it impairs associative learning without inducing confusion.
