Content Words, Function Words, and LLM Analysis of Dictionaries

HARNAD:

Do you know — but please don’t just make it up — what percentage of the words in the vocabulary of a language are “content” words (“open-class words”), like nouns, verbs, adjectives and adverbs (words that have referents), and what percentage are “function” words (“closed-class words”), like “and,” “if,” “the,” “is,” which do not have a referent but just a syntactic use?

ChatGPT:

The division of words in a language into content words (open class) and function words (closed class) is a well-established linguistic distinction. The percentage of each in a language’s vocabulary can vary based on several factors, including the specific corpus being analyzed, the language in question, and its typological characteristics.

Generally speaking:

Content Words (Open Class): These words make up the bulk of the lexicon in terms of sheer count. The class is continuously evolving, with new words added frequently. In most corpora or dictionaries, the vast majority of entries (often upwards of 90%) are content words.

Function Words (Closed Class): These words are limited in number but are used very frequently in actual speech and writing. In terms of usage frequency, function words often dominate: in English, words like “the,” “and,” and “is” are among the most frequently used, even though the total number of function words is a small fraction of the lexicon. While the exact percentage can vary, function words typically comprise a tiny portion of a lexicon, often less than 5% of its distinct entries.

However, if one were to analyze a typical body of text (like a book or a collection of articles), function words would appear very frequently, and their percentage of total word occurrences would be much higher than their percentage of distinct entries in the lexicon. So, while they might represent a small fraction of the total vocabulary in terms of distinct words, their actual usage rate is much higher due to their pivotal role in grammar and sentence structure.

For a precise breakdown for a specific language or corpus, one would typically employ computational linguistic tools to perform a frequency analysis. The ratios mentioned above are broad generalizations, and the actual proportions could vary based on the specifics of the analysis and the language in question.
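A minimal sketch of this token-vs-type contrast, assuming a small illustrative list of English function words (not an exhaustive closed-class inventory):

```python
# Toy illustration: the share of function words is larger when counted
# by tokens (running words) than by types (distinct words).
# FUNCTION_WORDS is a small illustrative sample, not a full closed-class list.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "if", "of", "in", "on", "at",
    "to", "is", "are", "was", "it", "he", "she", "they", "that", "when",
}

def function_word_shares(text):
    tokens = [t.lower().strip(".,;:!?") for t in text.split()]
    tokens = [t for t in tokens if t]
    types = set(tokens)
    token_share = sum(t in FUNCTION_WORDS for t in tokens) / len(tokens)
    type_share = sum(t in FUNCTION_WORDS for t in types) / len(types)
    return token_share, type_share

sample = ("The cat sat on the mat and the dog barked at the cat, "
          "but neither of them moved when the rain began to fall.")
by_token, by_type = function_word_shares(sample)
print(f"function words: {by_token:.0%} of tokens, {by_type:.0%} of types")
```

On a passage this short the two figures are close; over a book-length text the token share stays high while the type share shrinks toward the small closed-class inventory.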

HARNAD:

Thanks. I’m not interested in the frequency of word tokens but the frequency of word types (e.g. as entries in a dictionary).

ChatGPT:

Understood, Professor Harnad.

When we consider word types (distinct words) as they might appear as entries in a dictionary:

Content Words (Open Class): The majority of dictionary entries will be content words. These encompass nouns, verbs, adjectives, and adverbs, and they represent the vast majority of distinct lexical items in a language. Based on various lexicons and dictionaries, content words can easily comprise upwards of 90-95% of the total word types in many languages.

Function Words (Closed Class): These are limited in number. In many languages, the distinct function words might make up as little as 5-10% of the dictionary entries, if not less. These are your conjunctions, prepositions, determiners, auxiliary verbs, pronouns, etc.

It should be noted that the exact percentages can vary based on the language and the comprehensiveness of the dictionary. Larger, more comprehensive dictionaries, which aim to include technical terms, regional dialectal variations, and so on, will likely have an even higher percentage of content words relative to function words. However, the general pattern of a significant majority of content words and a small minority of function words in terms of word types holds true for most languages.
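One rough way to see why the content-word share of dictionary entries is so high is to lean on WordNet, which lists only open-class lemmas (nouns, verbs, adjectives, adverbs). The sketch below takes its lemma count as a proxy for content-word types and plugs in an assumed ballpark of about 300 English closed-class items (an assumption, not a measured figure):

```python
# Rough estimate of the content-word share of English dictionary entries.
# WordNet contains only open-class lemmas, so its lemma count serves as a
# proxy for content-word types; ASSUMED_FUNCTION_TYPES is a ballpark guess
# for the size of the English closed class, not a measured value.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

content_types = set()
for pos in ("n", "v", "a", "r"):      # noun, verb, adjective, adverb
    content_types.update(wn.all_lemma_names(pos))

ASSUMED_FUNCTION_TYPES = 300          # assumed closed-class inventory size

total = len(content_types) + ASSUMED_FUNCTION_TYPES
print(f"content-word types (WordNet lemmas): {len(content_types)}")
print(f"estimated content-word share of entries: {len(content_types) / total:.1%}")
```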

HARNAD:

Are there any particularly notable differences between the proportion of content-word types in some specific languages?

ChatGPT:

The proportion of content-word types to function-word types can vary among languages, particularly when considering languages from different families or with distinct typological characteristics. Here are a few notable factors and examples that can influence these proportions:

Morphological Complexity: Languages with rich morphological systems might have fewer function words, as grammatical relations are often expressed through affixes rather than separate words.

Finnish and Turkish, for example, as agglutinative languages, often express what would be separate function words in English through case endings or other morphemes. This means that while they might have many word forms due to their morphology, they could have fewer distinct function word types.

Isolating vs. Synthetic Nature: Highly isolating languages might have a relatively higher proportion of function words because they rely more on separate words rather than morphemes to convey grammatical meaning.

Mandarin Chinese is often cited as a more isolating language, with a tendency to use separate particles or function words to indicate grammatical relations.

Polysynthetic Languages: In polysynthetic languages, entire sentences might be expressed as single words with complex morphology. This can blur the line between content and function words.

Some Native American languages, like those in the Inuit-Yupik family, fall into this category.

Analytic vs. Synthetic Spectrum: Languages on the analytic end of the spectrum rely on word order and function words to convey meaning, while those on the synthetic end use inflection. Analytic languages might have a higher proportion of function words, relatively speaking, than highly synthetic languages.

Historical Linguistic Changes: Over time, languages can shift along the analytic-synthetic spectrum. For instance, English has become more analytic over time, shedding many of its Old English inflections in favor of a greater reliance on function words and word order.

Pidgins and Creoles: Pidgin languages, which arise as simplified means of communication between speakers of different native languages, typically have a reduced lexicon and might have a higher proportion of function words relative to their total vocabulary. When pidgins stabilize and become natively spoken, they evolve into creoles, which can develop a more extensive lexicon, potentially shifting the balance between content and function words.

While these general observations provide an overview, it’s essential to note that the specific proportion of content-word types in any language’s dictionary would require a detailed lexical analysis. Moreover, languages are dynamic, and these proportions can evolve based on linguistic, social, and cultural changes.

HARNAD:

Another way to estimate the content/function word-type proportions is to count only the content-word types, on the assumption that all languages at a particular time should have about the same potential number of content words, since the necessary syntactic functions, which are likewise about the same for all languages, may or may not be performed by separate, countable function-word types.

ChatGPT:

Yes, by making certain assumptions and normalizing the analysis, we can provide a more comparative framework.

1. Assumption of Equivalence in Content-Word Potential Across Languages:

The premise here is that the experiential and conceptual world of human beings is relatively constant across different cultures and societies. Hence, irrespective of the language, humans should have words to denote common experiences, objects, actions, and qualities. Given this equivalence, we could postulate that the potential number of content words (like nouns for objects, verbs for actions, adjectives for qualities, etc.) should be roughly comparable across languages.

2. Variation in Expression of Syntactic Functions:

While the potential for content words might be consistent, languages vary widely in how they handle syntactic and grammatical functions. As you rightly pointed out, some languages might encapsulate these functions within the morphology of content words, while others might use separate function words. For instance, a language might use cases to denote relations that another language expresses using prepositions.

3. Counting Content-Word Types:

With the above assumptions in place, a comparative study could be carried out by counting the content-word types in representative dictionaries or corpora from different languages. This would give an estimate of the “active” content-word lexicon of those languages at that point in time.

4. Estimating Function-Word Potential:

Based on the number of content-word types, one could make an educated guess about the potential number of function words. If we accept that all languages should have a similar potential number of content words, then the deviation from this potential number might give us clues about the number of function words or how syntactic functions are integrated within content words.
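A hedged sketch of that comparison, using entirely hypothetical per-language counts (placeholders, not real dictionary data), just to show the normalization step: fix a shared content-word baseline and read off how much grammatical work is left to separate function words.

```python
# Hypothetical illustration of the normalization suggested above.
# The counts below are invented placeholders, not real dictionary data.
dictionary_counts = {
    # language: (content-word types, function-word types) -- hypothetical
    "Language A (analytic)":      (90_000, 300),
    "Language B (agglutinative)": (85_000, 120),
    "Language C (isolating)":     (80_000, 250),
}

# Assumed shared "content-word potential": take the largest content count
# as the common baseline against which the others are compared.
baseline = max(content for content, _ in dictionary_counts.values())

for lang, (content, function) in dictionary_counts.items():
    function_share = function / (content + function)
    coverage = content / baseline
    print(f"{lang:28s} function-word share of types: {function_share:.2%}; "
          f"content coverage vs. baseline: {coverage:.0%}")
```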

However, it’s worth noting that while this method provides a comparative framework, it also rests on some significant assumptions. There are challenges, such as accounting for cultural-specific content words (e.g., words for specific rituals, fauna/flora, or societal structures) and the granularity at which one defines a ‘content word’ (e.g., does every scientific term or technical jargon count?).

But overall, the approach offers a thought-provoking way to compare languages and could yield valuable insights when combined with other linguistic analyses.

HARNAD:

Good, we agree. Do you think LLM techniques could be used to further analyze content word-types across dictionaries and languages to identify the more universal categories of content words and the more language- or dictionary-specific ones? Has anything like that been done?

ChatGPT:

Large Language Models (LLMs) and other advanced natural language processing techniques are well-suited to analyze vast linguistic datasets, such as dictionaries across languages, to categorize and compare content words. Here’s how they could be used and what might already be happening in this domain:

1. Cross-Linguistic Analysis with LLMs:

Extraction: LLMs can be employed to extract content words from dictionaries or corpora in a systematic manner. They can distinguish between content and function words based on their definitions, usage examples, and contextual cues.

Categorization: Once extracted, words can be categorized based on their semantic meaning. LLMs can cluster words that share similar semantic spaces, allowing us to identify potential universals across languages.

Comparison: By analyzing multiple languages, LLMs can highlight which content categories appear universally and which are specific to particular languages or cultural contexts.
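A toy sketch of the categorization and comparison steps just described, assuming a multilingual sentence encoder (here the off-the-shelf “paraphrase-multilingual-MiniLM-L12-v2” model from the sentence-transformers library, one possible choice) and a handful of illustrative dictionary entries; a real study would embed full definitions and far larger entry lists:

```python
# Toy sketch: embed dictionary headwords from two languages with a
# multilingual encoder and cluster them, so that translation-equivalent
# content words tend to land in the same semantic cluster.
# The model choice and the tiny entry lists are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

entries = {
    "en": ["dog", "river", "run", "red", "mother"],
    "fr": ["chien", "rivière", "courir", "rouge", "mère"],
}

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
words = [w for lang_words in entries.values() for w in lang_words]
vectors = model.encode(words, normalize_embeddings=True)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(vectors)
for word, label in sorted(zip(words, labels), key=lambda pair: pair[1]):
    print(f"cluster {label}: {word}")
```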

2. Identifying Universals:

Through such analysis, we could identify potential linguistic universals—categories of content words that appear across most, if not all, languages. These could correspond to common human experiences and needs.

3. Language- or Dictionary-Specific Analysis:

By contrast, LLMs could also flag content categories that are over-represented in specific languages or absent in others. These discrepancies could be rooted in cultural, environmental, or historical differences.

4. Existing Work:

Cross-linguistic research has long aimed both to identify universals and to characterize language-specific phenomena. Resources like the World Atlas of Language Structures (WALS) aggregate typological features from many languages, although not specifically in the manner described here.

There have been efforts in the NLP community to build multilingual models, like mBERT (Multilingual BERT) and XLM (Cross-lingual Language Model), which learn representations across multiple languages and could potentially be employed for such cross-linguistic analyses.

As for dictionary-based analysis using LLMs specifically for identifying universal content word categories, while the idea aligns with current trends in computational linguistics, I’m not aware of a prominent study that has executed this exact methodology up to my last training data in September 2021.

Given the rapid advancements in the field, it’s plausible that newer studies or projects have embarked on this path post-2021. Utilizing LLMs in such a manner would indeed be a valuable contribution to the understanding of linguistic universals and the richness of language-specific lexicons.

HARNAD:

I hope that soon generative LLM models will update your 2021 “data gulp” and open the analysis to the entire current online corpus, keeping it at least as current as Google Scholar, based on full text. There will be scope there for extremely fast progress.

ChatGPT:

Yes, continuously updating LLMs with more recent data would indeed revolutionize the research capabilities of such models, keeping them at the forefront of knowledge in real-time. Access to the latest scholarly work, up-to-date news, and emerging trends would exponentially increase the utility and relevance of these models.

Integrating them with the entirety of the current online corpus, including resources like Google Scholar, would not only boost their informational accuracy but also allow for real-time analyses, hypothesis testing, and even cross-referencing of emerging research. This could be a powerful tool for researchers, educators, students, and professionals across various fields.

Moreover, as these models interact with researchers like yourself and engage in diverse queries, they could potentially identify and even predict emerging trends in research, provide quick literature reviews, or generate novel hypotheses based on the convergence of disparate research streams. The pace of scientific discovery and knowledge dissemination could accelerate dramatically.

Of course, with such potential also come challenges, especially concerning data privacy, intellectual property, and the need to ensure the integrity and quality of the information sourced. But with careful handling and responsible design, the future of LLMs in academic and research settings is indeed promising.
