Browse Articles

Large Language Models are Good Translators

Zeng et al. | Oct 16, 2024

Machine translation remains a challenging area of artificial intelligence: neural machine translation (NMT) has made significant strides over the past decade but still faces hurdles, particularly its reliance on expensive bilingual training data to achieve high translation quality. This study explores whether large language models (LLMs) such as GPT-4 can be effectively adapted for translation tasks and outperform traditional NMT systems.

Comparison of three large language models as middle school math tutoring assistants

Ramanathan et al. | May 02, 2024

Image credit: Thirdman

Middle school math forms the basis for advanced mathematical courses leading up to the university level. Large language models (LLMs) have the potential to power next-generation educational technologies, acting as digital tutors to students. The main objective of this study was to determine whether LLMs like ChatGPT, Bard, and Llama 2 can serve as reliable middle school math tutoring assistants on three tutoring tasks: hint generation, comprehensive solution, and exercise creation.

Gradient boosting with temporal feature extraction for modeling keystroke log data

Barretto et al. | Oct 04, 2024

Image credit: Barretto and Barretto 2024.

Although there has been great progress in the field of natural language processing (NLP) over the last few years, particularly with the development of attention-based models, comparatively little research has addressed modeling keystroke log data. State-of-the-art methods handle textual data directly, and while this has produced excellent results, their time complexity and resource usage are quite high. Additionally, these methods fail to incorporate the actual writing process when assessing text and instead focus solely on the content. We therefore proposed a framework for modeling textual data using keystroke-based features. Such methods pay attention to how a document or response was written rather than to the final text that was produced. These features are vastly different from the kind extracted from raw text, but they reveal information that is otherwise hidden. We hypothesized that pairing efficient machine learning techniques with keystroke log information would produce results comparable to those of transformer-based models, which learn to attend to the different components of a text sequence, in far less time. Transformer-based methods currently dominate the field of NLP due to the strong understanding of natural language they display. We showed that models trained on keystroke log data can effectively evaluate the quality of writing, and do so in a significantly shorter amount of time than traditional methods. This is significant because it provides a much-needed fast and cheap alternative to increasingly larger and slower LLMs.
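As a toy illustration of the kind of pipeline this abstract describes, one might fit a gradient-boosted model on hand-crafted temporal features derived from keystroke logs. The feature names, their distributions, and the synthetic quality score below are invented for this sketch and are not taken from the paper; it only shows the general shape of the approach, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Illustrative temporal features one might extract from keystroke logs
# (hypothetical choices, not the authors' actual feature set):
X = np.column_stack([
    rng.normal(0.25, 0.05, n),   # mean inter-key interval (seconds)
    rng.poisson(5, n),           # count of long pauses (> 2 s)
    rng.uniform(0.0, 0.3, n),    # fraction of deletion events
    rng.normal(12, 3, n),        # mean typing-burst length (keystrokes)
])

# Synthetic writing-quality score loosely tied to the features, plus noise
y = (3.0 - 2.0 * X[:, 0] - 0.05 * X[:, 1] - 1.5 * X[:, 2]
     + 0.02 * X[:, 3] + rng.normal(0, 0.1, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)
print(f"held-out R^2: {r2:.2f}")
```

Because the model sees only a handful of numeric features per document rather than the full token sequence, training and inference are orders of magnitude cheaper than running a transformer, which is the trade-off the abstract highlights.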

Grammatical Gender and Politics: A Comparison of French and English in Political Discourse

Zhang et al. | Jul 07, 2021

Grammatical gender systems are prevalent across many languages, and the presence of such a system is a strong distinction between French and English. Studies have suggested that assigned grammatical gender can influence the conceptualization of nouns (the attribution of gendered qualities to them), thus affecting people's thoughts on a grand scale. We hypothesized that, owing to the influence of a grammatical gender system, French political discourse would show a large difference between the numbers of masculine and feminine nouns used. Specifically, we predicted a larger ratio of feminine to masculine nouns in French political discourse than in non-political discourse, relative to the same comparison in English. Through linguistic analysis of gendered nouns in French political writing, we found a clear difference between the numbers of feminine and masculine nouns, signaling a preference for a more "effeminate" language.

Reddit v. Wall Street: Why Redditors beat Wall Street at its own game

Bhakar et al. | Sep 13, 2022

Here the authors investigated the motivations behind the short squeeze of GameStop stock, in which users of the internet forum Reddit drove a sudden increase in GameStop's stock price at the start of 2021. They relied on both qualitative and quantitative analyses, tracking activity on the r/WallStreetBets subreddit in relation to mentions of GameStop. With these methods they found that while the short squeeze was initially driven by financial motivations, emotional motivations later became more important. They suggest that social phenomena can be dynamic and evolving, necessitating mixed-methods approaches to study them.

Does language familiarity affect typing speed?

Shin et al. | Aug 23, 2024

In cognitive psychology, typed responses are used to assess thinking skills and creativity, but research on the factors influencing typing speed is limited. This study examined how language familiarity affects typing speed, hypothesizing that familiarity with a language would correlate with faster typing. Participants typed faster in English than in Latin, and those unfamiliar with Latin showed a larger discrepancy between the two languages. However, Latin education level did not significantly impact typing speed, highlighting the role of language familiarity in typing performance.

A machine learning approach for abstraction and reasoning problems without large amounts of data

Isik et al. | Jun 25, 2022

While remarkable in its ability to mirror human cognition, machine learning often requires extensive data to complete tasks effectively. However, data is not always plentiful: unpredictable events occur throughout daily life that demand flexibility from the artificial intelligence behind technologies such as personal assistants and self-driving vehicles. Driven by the need for AI to complete tasks without extensive training, the researchers in this article use fluid intelligence assessments to develop an algorithm capable of generalization and abstraction. By forgoing a focus on skill-based training, the article demonstrates the potential of targeting a more generalized cognitive ability, yielding an approach that proves more flexible, and thus more human-like, in solving unique tasks than skill-focused algorithms.