Manufacturers that produce products using fused filament fabrication (FFF) 3D printing technologies have control over numerous build parameters, including the number of solid layers on the exterior of the product, the percentage of material filling the interior volume, and the many different infill patterns used to fill that interior. This study investigates the hypothesis that as the density of the part increases, the mechanical properties will improve at the expense of build time and the amount of material required.
Recognition of animal body parts via supervised learning
The application of machine learning techniques has facilitated the automatic annotation of behavior in video sequences, offering a promising approach for ethological studies by reducing the manual effort required for annotating each video frame. Nevertheless, before solely relying on machine-generated annotations, it is essential to evaluate the accuracy of these annotations to ensure their reliability and applicability. While it is conventionally accepted that there cannot be a perfect annotation, the degree of error associated with machine-generated annotations should be commensurate with the error between different human annotators. We hypothesized that machine learning supervised with adequate human annotations would be able to accurately predict body parts from video sequences. Here, we conducted a comparative analysis of the quality of annotations generated by humans and machines for the body parts of sheep during treadmill walking. For human annotation, two annotators manually labeled six body parts of sheep in 300 frames. To generate machine annotations, we employed the state-of-the-art pose-estimation library DeepLabCut, which was trained using the frames annotated by the human annotators. As expected, the human annotations demonstrated high consistency between annotators. Notably, the machine learning algorithm also generated accurate predictions, with errors comparable to those between humans. We also observed that abnormal annotations with a high error could be revised by introducing Kalman filtering, which interpolates the trajectory of body parts over the time series, enhancing robustness. Our results suggest that conventional transfer learning methods can generate behavior annotations as accurate as those made by humans, presenting great potential for further research.
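To illustrate the kind of trajectory correction described above, here is a minimal, hypothetical sketch of a one-dimensional constant-velocity Kalman filter applied to a noisy keypoint track with missing detections. The function name and all noise parameters are illustrative assumptions, not the study's actual settings or DeepLabCut's implementation:

```python
import numpy as np

def kalman_smooth(positions, process_var=1.0, meas_var=4.0):
    """Constant-velocity Kalman filter over a 1-D keypoint trajectory.

    positions: per-frame coordinates; np.nan marks a missing/abnormal detection.
    Returns filtered positions; gaps are bridged by the motion model.
    """
    x = np.array([positions[0] if np.isfinite(positions[0]) else 0.0, 0.0])  # [pos, vel]
    P = np.eye(2) * 10.0                       # initial state uncertainty
    F = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition (dt = 1 frame)
    Q = np.eye(2) * process_var                # process noise
    H = np.array([[1.0, 0.0]])                 # we observe position only
    R = np.array([[meas_var]])                 # measurement noise
    out = []
    for z in positions:
        # predict one frame ahead
        x = F @ x
        P = F @ P @ F.T + Q
        # update only when a detection is present
        if np.isfinite(z):
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# A body part moving ~2 px/frame, with noise and two dropped detections:
true = 2.0 * np.arange(20)
rng = np.random.default_rng(0)
noisy = true + rng.normal(0, 2.0, 20)
noisy[8:10] = np.nan                           # frames the detector missed
smooth = kalman_smooth(noisy)
```

The prediction step fills the two missing frames from the motion model, which is the interpolation behavior the abstract attributes to Kalman filtering.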
Blockchain databases: Encrypted for efficient and secure NoSQL key-store
Although commonly associated with cryptocurrency, blockchains offer security features from which other databases could benefit. These student authors tested a blockchain database framework and, by tracking runtime across four independent variables, demonstrate that this framework is feasible for application.
A Crossover Study Comparing the Effect of a Processed vs. Unprocessed Diet on the Spatial Learning Ability of Zebrafish
The authors compared the short-term effects of processed versus unprocessed food on spatial learning and survival in zebrafish, given the large public concern regarding processed foods. By randomly assigning zebrafish to a diet of brine shrimp flakes (processed) or live brine shrimp (unprocessed), the authors show that while neither diet had an immediate effect on a fish's decision process, the unprocessed diet was significantly correlated with improved learning and stress response.
Examining the Accuracy of DNA Parentage Tests Using Computer Simulations and Known Pedigrees
How accurate are DNA parentage tests? In this study, the authors hypothesized that current parentage tests are reliable when the analysis involves only one or a few families of yellow perch (Perca flavescens). Their results suggest that DNA parentage tests are reliable as long as the right methods are used, since most such tests involve only one family, and that the results of parentage analyses of large populations should be used only as a reference.
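The exclusion logic underlying allele-sharing parentage tests of the kind simulated in studies like this can be sketched as a toy Mendelian simulation. All function names, locus counts, and allele counts below are invented for illustration; this is not the authors' code or analysis:

```python
import random

def make_parent(n_loci, n_alleles):
    """A diploid genotype: two distinct alleles per locus."""
    return [tuple(random.sample(range(n_alleles), 2)) for _ in range(n_loci)]

def mate(mom, dad):
    """Offspring inherits one allele from each parent at every locus."""
    return [(random.choice(m), random.choice(d)) for m, d in zip(mom, dad)]

def compatible(offspring, candidate):
    """Exclusion test: a true parent shares >= 1 allele at EVERY locus."""
    return all(set(o) & set(c) for o, c in zip(offspring, candidate))

def assign_parent(offspring, candidates):
    """Indices of candidates not excluded at any locus."""
    return [i for i, c in enumerate(candidates) if compatible(offspring, c)]

random.seed(42)
N_LOCI, N_ALLELES = 10, 20
true_dad = make_parent(N_LOCI, N_ALLELES)
mom = make_parent(N_LOCI, N_ALLELES)
kid = mate(mom, true_dad)

# Candidate pool: the true father plus 49 unrelated fish.
candidates = [true_dad] + [make_parent(N_LOCI, N_ALLELES) for _ in range(49)]
matches = assign_parent(kid, candidates)
```

With few families in the pool, unrelated candidates are almost always excluded at some locus, which is why small-family analyses tend to be reliable; as the pool grows, chance allele sharing makes false inclusions more likely.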
Gradient boosting with temporal feature extraction for modeling keystroke log data
Although there has been great progress in the field of natural language processing (NLP) over the last few years, particularly with the development of attention-based models, less research has contributed towards modeling keystroke log data. State-of-the-art methods handle textual data directly, and while this has produced excellent results, their time complexity and resource usage are quite high. Additionally, these methods fail to incorporate the actual writing process when assessing text and instead focus solely on the content. Therefore, we proposed a framework for modeling textual data using keystroke-based features. Such methods pay attention to how a document or response was written, rather than only the final text that was produced. These features are vastly different from the kind of features extracted from raw text but reveal information that is otherwise hidden. We hypothesized that pairing efficient machine learning techniques with keystroke log information should, in far less time, produce results comparable to those of transformer techniques, models that weight the different components of a text sequence by their relevance. Transformer-based methods currently dominate the field of NLP due to the strong understanding of natural language they display. We showed that models trained on keystroke log data are capable of effectively evaluating the quality of writing, and do so in a significantly shorter amount of time than traditional methods. This is significant as it provides a much-needed fast and cheap alternative to ever-larger and slower LLMs.
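As a rough illustration of the proposed approach, the sketch below extracts simple temporal features from synthetic keystroke timestamps and fits a gradient-boosting model. The feature set, the synthetic data, and the "quality" target are invented stand-ins; they are assumptions for illustration, not the study's actual features or pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def keystroke_features(timestamps, pause_threshold=2.0):
    """Temporal features from one writing session's key-press times (seconds)."""
    t = np.asarray(timestamps, dtype=float)
    iki = np.diff(t)                              # inter-key intervals
    pauses = iki[iki > pause_threshold]           # long hesitations
    return np.array([
        t[-1] - t[0],                             # total writing time
        len(t),                                   # keystroke count
        iki.mean(),                               # mean inter-key interval
        iki.std(),                                # typing-rhythm variability
        len(pauses),                              # number of long pauses
        pauses.sum() if len(pauses) else 0.0,     # time spent pausing
    ])

# Synthetic sessions: faster, steadier typists get higher (made-up) scores.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    rate = rng.uniform(0.1, 0.8)                  # mean seconds per keystroke
    n = int(rng.integers(50, 200))
    ts = np.cumsum(rng.exponential(rate, n))
    X.append(keystroke_features(ts))
    y.append(1.0 / rate + rng.normal(0, 0.5))     # toy writing-quality score
X, y = np.array(X), np.array(y)

model = GradientBoostingRegressor(n_estimators=100, max_depth=3).fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])
```

The point of the design is that the model never sees the text itself, only a handful of process features, which keeps both training and inference far cheaper than running a transformer over the full response.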
Impact of carbon number and atom number on cc-pVTZ Hartree-Fock Energy and program runtime of alkanes
It is time-consuming to complete the calculations used to study nuclear reactions and energy. To uncover which computational chemistry tools are useful for this challenge, Pan, Vaiyakarnam, Li, and McMahan investigated whether the Hartree-Fock method of the Python-based Simulations of Chemistry Framework (PySCF) is an efficient and accurate way to assess alkane molecules.
The effect of activation function choice on the performance of convolutional neural networks
With the advance of technology, artificial intelligence (AI) is now applied widely in society. In the study of AI, machine learning (ML) is a subfield in which a machine learns to be better at performing certain tasks through experience. This work focuses on the convolutional neural network (CNN), a framework of ML, applied to an image classification task. Specifically, we analyzed the performance of the CNN as the type of neural activation function changes.
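A minimal sketch of the design choice under study: the same convolutional layer evaluated with interchangeable activation functions. This toy NumPy forward pass is illustrative only, with invented names and random data; it is not the network the authors trained:

```python
import numpy as np

# Candidate nonlinearities whose choice the study compares.
ACTIVATIONS = {
    "relu":    lambda x: np.maximum(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh":    np.tanh,
}

def conv2d(image, kernel):
    """Naive valid-mode 2-D convolution (cross-correlation), one channel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def conv_layer(image, kernel, activation):
    """One CNN layer: convolution followed by the chosen nonlinearity."""
    return ACTIVATIONS[activation](conv2d(image, kernel))

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3))
maps = {name: conv_layer(img, k, name) for name in ACTIVATIONS}
```

Swapping the activation changes the range and gradient behavior of every feature map (ReLU clips negatives, sigmoid squashes to (0, 1), tanh to (-1, 1)), which is the mechanism by which the choice affects classification performance.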
Use of yogurt bacteria as a model surrogate to compare household cleaning solutions
While resources on the safety of household cleaning products are plentiful, measures of the efficacy of these cleaning chemicals against bacteria and viruses remain unstandardized in the consumer market. The COVID pandemic has exacerbated this knowledge gap, stoking the growth of misinformation and misuse surrounding household cleaning chemicals. At a time when sanitization standards are sorely needed, the authors of this paper have created a quantifying framework for consumers by comparing a wide range of household cleaning products on their efficacy against bacteria from a safe and easily replicable yogurt model.
Refinement of Single Nucleotide Polymorphisms of Atopic Dermatitis related Filaggrin through R packages
In the United States, 17.8 million people are currently affected by atopic dermatitis (AD), commonly known as eczema, which is characterized by itching and skin inflammation. AD patients are at higher risk for infections, depression, cancer, and suicide. Genetics, environment, and stress are among the causes of the disease. With the rise of personalized medicine and the acceptance of gene-editing technologies, AD-related variations need to be identified for treatment. Genome-wide association studies (GWAS) have associated the Filaggrin (FLG) gene with AD but have not identified specific problematic single nucleotide polymorphisms (SNPs). This research aimed to refine known SNPs of FLG for gene-editing technologies, to establish a causal link between specific SNPs and the disease, and to target the polymorphisms. The research utilized R and its Bioconductor packages to refine data from the National Center for Biotechnology Information's (NCBI's) Variation Viewer. The algorithm filtered the dataset by coding regions and conserved domains, removed synonymous variations, and treated non-synonymous, frameshift, and nonsense variations separately. The non-synonymous variations were refined and ordered by the BLOSUM62 substitution matrix. Overall, the analysis removed 96.65% of the data, which was redundant or outside the focus of the research, and ordered the remaining relevant data by impact. The code for the project can also be repurposed as a tool for other diseases. The research can help solve GWAS's imprecise-identification challenge and is the first step in providing the refined databases required for gene-editing treatment.
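The filtering steps above can be sketched in Python (the study itself used R and Bioconductor). The records, field names, and substitution scores below are placeholders standing in for the NCBI Variation Viewer data and the full BLOSUM62 matrix; none of them reproduce the study's actual inputs:

```python
# Each toy record: (rsID, region, consequence, ref_aa, alt_aa)
snps = [
    ("rs1", "coding", "synonymous",     "L", "L"),
    ("rs2", "coding", "missense",       "D", "E"),
    ("rs3", "intron", "intron_variant", None, None),
    ("rs4", "coding", "missense",       "R", "W"),
    ("rs5", "coding", "stop_gained",    "Q", "*"),
    ("rs6", "coding", "missense",       "I", "L"),
]

# Placeholder substitution scores (stand-in for BLOSUM62); lower = more drastic.
SCORE = {("D", "E"): 2, ("I", "L"): 2, ("R", "W"): -3}

def refine(records):
    coding = [r for r in records if r[1] == "coding"]        # keep coding regions
    nonsyn = [r for r in coding if r[2] != "synonymous"]     # drop silent changes
    nonsense = [r for r in nonsyn if r[2] == "stop_gained"]  # handled separately
    missense = [r for r in nonsyn if r[2] == "missense"]
    # Order missense variants by substitution score, most drastic first.
    missense.sort(key=lambda r: SCORE.get((r[3], r[4]), 0))
    return missense, nonsense

missense, nonsense = refine(snps)
```

On this toy input, the non-coding and synonymous records are discarded, the nonsense variant is set aside, and the remaining missense variants are ranked so the most biochemically drastic substitution comes first, mirroring the impact ordering the abstract describes.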