
Meaning is not something we find.
It's something we make.
My research blends linguistics, artificial intelligence, and narrative theory
to understand how we create meaning through language.
Threads of Inquiry
Language does not come neatly divided. It couldn't. Language expresses meaning — and meaning arises from threads that twist, tangle, and twine. That we can channel something so wild into shared signs — sounds, gestures, scribbles — is one of our most astounding achievements.
I don't believe we can study such an achievement by carving it into clean categories. To understand meaning and language, we have to enter the weave ourselves — and re-create the very mystery we seek to unravel. What follows is therefore not a taxonomy, but a tapestry: a set of entry points into my research, where each thread is interwoven with the rest. Pull one, and patterns emerge elsewhere.
These conceptual threads don't run in straight lines. If you're unsure which thread to pull first, I suggest starting with the constructional idea, which explores how language weaves meaning into form.

How do we weave meaning into form?

How do we cultivate a model of language?
Fluid Construction Grammar

How do we make sense of syntax?
Semantic Interpretation

How do we make sense through narrative?
How do we weave meaning into form?

For much of modern linguistics, form and meaning have lived in separate worlds: one of syntactic rules, the other of semantics. They touch, but only at the interfaces: syntax first composes well-formed structures, which are then handed off to a separate system for semantic interpretation.
But what if grammar isn't an autonomous system at all, but already steeped in meaning — down to its smallest patterns? This was Charles Fillmore's dangerous idea: that all of language can be described as mappings between form and function called constructions. 'Dangerous' not because it is rejected by scientists who hold rival views, but because it meets resistance even from the scholars it has inspired — including some of Fillmore's closest collaborators.
In my work, I explore what makes Fillmore's constructional idea so powerful, and how it could forever change the study of language — if only we are willing to take it to its logical conclusion. Because constructions aren't just patterns: they come with conditions for use and interpretation. Like local experts, they autonomously know when a pattern applies, how it functions in context, and how it connects to the rest of the system. Once we take that seriously, nothing about language study looks quite the same.
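To make the constructional idea concrete, here is a minimal, purely illustrative sketch in Python. It is not the Fluid Construction Grammar API — the class name, fields, and toy "ditransitive" pattern are all hypothetical — but it shows the core intuition: a construction pairs form with meaning and can be activated from either side, by an observed form (comprehension) or by an intended meaning (production).

```python
from dataclasses import dataclass, field

@dataclass
class Construction:
    """A toy form-meaning pairing; illustration only, not FCG."""
    name: str
    form: dict       # constraints on the observable form
    meaning: dict    # the function/interpretation it contributes
    conditions: dict = field(default_factory=dict)  # contextual conditions of use

    def matches_form(self, observed: dict) -> bool:
        # Comprehension direction: does the observed form fit this pattern?
        return all(observed.get(k) == v for k, v in self.form.items())

    def matches_meaning(self, intended: dict) -> bool:
        # Production direction: does this pattern express the intended meaning?
        return all(intended.get(k) == v for k, v in self.meaning.items())

# A toy ditransitive construction: "X gives Y Z" expresses transfer.
ditransitive = Construction(
    name="ditransitive",
    form={"pattern": "NP V NP NP"},
    meaning={"frame": "transfer"},
)
```

The point of the sketch is the symmetry: the same object serves both directions of processing, which is exactly what tree-plus-interface architectures struggle to provide.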
In the publications below, I explore some of the consequences of the constructional idea, not just theoretically, but also computationally. That's why I work with Fluid Construction Grammar: to formalize Construction Grammar in a way that respects its conceptual depth. If you're curious about that formalism, you can also explore this thread dedicated to that work.
Selected Threads
- van Trijp, Remi (2024). "Nostalgia for the Future of Construction Grammar." Constructions and Frames, 16(2):311–345.
An invited contribution to a special issue on the future of Construction Grammar. Argues that Construction Grammar offers a uniquely powerful lens for studying language as a complex adaptive system.
Abstract: Construction Grammar is a nomadic family of theoretical approaches whose members are constantly moving in various directions. The diversity in construction-based approaches is a clear sign of a thriving and tolerant research community, but it also risks muddying the waters, leading to potential confusion. In this paper, I argue that the main source of confusion about Construction Grammar stems from the community’s gradual evolution from the traditional view of languages as static, idealized entities (the “aggregate” perspective) to the view of language as a complex adaptive system (the “population” perspective). While the aggregate perspective abstracts away as much as possible from variation and language usage, the population perspective greatly emphasizes the dynamics of language and situated communicative interactions. This paper illustrates what it means to perform constructional analyses from the population perspective, and argues that Construction Grammar is particularly well-positioned to lead the way in this new kind of linguistics, indicating that our community has a bright future ahead.
BibTeX: @article{vantrijp_nostalgia_2024, author = {{van Trijp}, Remi}, title = {Nostalgia for the Future of Construction Grammar}, journal = {Constructions and Frames}, year = {2024}, volume = {16}, number = {2}, pages = {311--345}, doi = {https://doi.org/10.1075/cf.23013.van}}
- van Trijp, Remi (2020). "Making Good on a Promise: Multidimensional Constructions." Belgian Journal of Linguistics, 34(1):357–370.
Reimagines constructions not as rebranded rules, but as dynamic, multidimensional representations capable of bridging theory and description. Offers a formal account that makes no prior assumptions about grammatical architecture, substantiated through a computational model of German field topology in Fluid Construction Grammar.
Abstract: Construction Grammar was founded on the promise of maximal empirical coverage without compromising on formal precision. Its main claim is that all linguistic knowledge can be represented as constructions, similar to the notion of constructions from traditional grammars. As such, Construction Grammar may finally reconcile the needs of descriptive and theoretical linguistics by establishing a common ground between them. Unfortunately, while the construction grammar community has developed a sophisticated understanding of what a construction is supposed to be, many critics still believe that a construction is simply a new jacket for traditional linguistic analyses and therefore inherits all of the problems of those analyses. The goal of this article is to refute such criticisms by showing how constructions can be formalized as open-ended and multidimensional linguistic representations that make no prior assumptions about the structure of a language. While this article’s proposal can be simply written down in a pen-and-paper style, it verifies the validity of its approach through a computational implementation of German field topology in Fluid Construction Grammar.
BibTeX: @article{van_trijp_making_2020, author = {{van Trijp}, Remi}, title = {Making {Good} on a {Promise}: {Multidimensional} {Constructions}}, journal = {Belgian Journal of Linguistics}, year = {2020}, volume = {34}, number = {1}, pages = {357--370}, doi = {https://doi.org/10.1075/bjl.00059.tri}}
- van Trijp, Remi (2016). "Chopping Down the Syntax Tree. What Constructions Can Do Instead." Belgian Journal of Linguistics, 30(1):15–38.
Challenges the need for special extraction rules in syntax by arguing that the real problem lies in the limitations of tree-based representations. Proposes constructions as a more expressive alternative — one that naturally captures word order variation and non-local dependencies. A computational implementation in Fluid Construction Grammar demonstrates how this rethinking of grammatical architecture opens new paths for modeling both comprehension and production.
Abstract: Word order, argument structure and unbounded dependencies are among the most important topics in linguistics because they touch upon the core of the syntax-semantics interface. One question is whether “marked” word order patterns, such as The man I talked to vs. I talked to the man, require special treatment by the grammar or not. Mainstream linguistics answers this question affirmatively: in the marked order, some mechanism is necessary for “extracting” the man from its original argument position, and a special placement rule (e.g. topicalization) is needed for putting the constituent in clause-preceding position. This paper takes an opposing view and argues that such formal complexity is only required for analyses that are based on syntactic trees. A tree is a rigid data structure that only allows information to be shared between local nodes, hence it is inadequate for non-local dependencies and can only allow restricted word order variations. A construction, on the other hand, offers a more powerful representation device that allows word order variations – even unbounded dependencies – to be analyzed as the side-effect of how language users combine the same rules in different ways in order to satisfy their communicative needs. This claim is substantiated through a computational implementation of English argument structure constructions in Fluid Construction Grammar that can handle both comprehension and formulation.
BibTeX: @article{vantrijp_chopping_2016, author = {{van Trijp}, Remi}, title = {Chopping Down the Syntax Tree. What Constructions Can Do Instead}, journal = {Belgian Journal of Linguistics}, year = {2016}, volume = {30}, number = {1}, pages = {15--38}, doi = {https://doi.org/10.1075/bjl.30.02van}}
- van Trijp, Remi (2015). "Cognitive vs. Generative Construction Grammar: The Case of Coercion and Argument Structure." Cognitive Linguistics, 26(4):613–632.
Shows how debates about argument structure mask a deeper divide within Construction Grammar: is grammar a system for generating sentences or for solving communicative problems? Offers a computational case study in Fluid Construction Grammar.
Abstract: One of the most salient hallmarks of construction grammar is its approach to argument structure and coercion: rather than positing many different verb senses in the lexicon, the same lexical construction may freely interact with multiple argument structure constructions. This view has however been criticized from within the construction grammar movement for leading to overgeneration. This paper argues that this criticism falls flat for two reasons: (1) lexicalism, which is the alternative solution proposed by the critics, has already been proven to overgenerate itself, and (2) the argument of overgeneration becomes void if grammar is implemented as a problem-solving model rather than as a generative competence model; a claim that the paper substantiates through a computational operationalization of argument structure and coercion in Fluid Construction Grammar. The paper thus shows that the current debate on argument structure is hiding a much more fundamental rift between practitioners of construction grammar that touches upon the role of grammar itself.
BibTeX: @article{van_trijp_cognitive_2015, author = {{van Trijp}, Remi}, title = {Cognitive vs. {Generative} {Construction} {Grammar}: {The} {Case} of {Coercion} and {Argument} {Structure}}, journal = {Cognitive Linguistics}, year = {2015}, volume = {26}, number = {4}, pages = {613--632}, doi = {https://doi.org/10.1515/cog-2014-0074}}
Back to Threads | Fluid Construction Grammar | What brings me here
How do we cultivate a model of language?

Language exists in the wild. It emerges from the interactions of language users negotiating shared worlds — unplanned, unfinished, radically adaptive. But to understand that system scientifically, we need spaces where ideas can grow under conditions we can observe and share.
A formalism doesn’t replace the wild. It creates a cultivated ecosystem: a kind of community garden where constructional dynamics can be planted, tested, pruned, and evolved.
Fluid Construction Grammar (FCG) is one such ecosystem. It didn't emerge fully formed. It has grown through the work of many researchers — shaped by evolving needs, tested in implementation, and adapted through experience. I've made my share of contributions to its development, but it has always been shared soil: tended by many, and open to new growth.
If you want a quick visual introduction, here's a short video explaining why we use FCG for deep modeling. Language is extraordinarily complex — not just in form, but in what we express and understand through it. Most technologies skim the surface; this video explains why we need formal tools like FCG if we want to truly engage with the richness of meaning. The video was recorded before the explosion of large language models, but its core message remains: if we care about interpretation, transparency, and cognitive insight, we need to dig deeper than observable form.
By making FCG an open-source platform, we are inviting you into that garden. To help give form to the constructional idea — not by encoding a static theory, but by cultivating a living one.
If you want to get your hands dirty, you can download FCG at this website. If you want to read more, please check out this article:
- van Trijp, Remi, Beuls, Katrien and Van Eecke, Paul (2022). "The FCG Editor: An innovative environment for engineering computational construction grammars." PLOS ONE, 17(6):e0269708.
Abstract: Since its inception in the mid-eighties, the field of construction grammar has been steadily growing and constructionist approaches to language have by now become a mainstream paradigm for linguistic research. While the construction grammar community has traditionally focused on theoretical, experimental and corpus-based research, the importance of computational methodologies is now rapidly increasing. This movement has led to the establishment of a number of exploratory computational construction grammar formalisms, which facilitate the implementation of construction grammars, as well as their use for language processing purposes. Yet, implementing large grammars using these formalisms still remains a challenging task, partly due to a lack of powerful and user-friendly tools for computational construction grammar engineering. In order to overcome this obstacle, this paper introduces the FCG Editor, a dedicated and innovative integrated development environment for the Fluid Construction Grammar formalism. Offering a straightforward installation and a user-friendly, interactive interface, the FCG Editor is an accessible, yet powerful tool for construction grammarians who wish to operationalise their construction grammar insights and analyses in order to computationally verify them, corroborate them with corpus data, or integrate them in language technology applications.
BibTeX: @article{van_trijp_fcg_2022, author = {{van Trijp}, Remi and Beuls, Katrien and {Van Eecke}, Paul}, title = {The {FCG} {Editor}: {An} innovative environment for engineering computational construction grammars}, journal = {PLOS ONE}, year = {2022}, volume = {17}, number = {6}, pages = {e0269708}, doi = {https://doi.org/10.1371/journal.pone.0269708}}
Back to Threads | Making Sense of Syntax | What brings me here
What Brings Me Here

On a personal website, you might expect the usual "about me":
Hi, I'm Remi van Trijp. I'm Research Leader at the
Sony Computer Science Laboratories, Paris Lab.
That sort of thing.
But just as meaning isn't a static essence, I think it's more interesting to understand a person through their trajectory — what moves them, what they've questioned, where they're heading — rather than a snapshot of what they (think they) are at any given moment. So allow me to tell you a story about what brought me here, and where I'm hoping to go next.
Finding My Way (2001—2005)
Back in my early twenties at the University of Antwerp, one of my professors told me: "Your stylistics is better than your linguistics." He meant it as a compliment — he really liked one of my essays — but at the same time it felt like a kind warning to major in literature instead. In hindsight, it's probably the most accurate review I've ever received, because I was indeed more interested in how language evokes meaning and emotion than in dissecting it into tree diagrams.
And yet, a year later I found myself disregarding his advice. I loved literature, but I had always wanted to do computer science as well. One seduced me with metaphors and narratives, the other with systems, structure and the thrill of building things.
Computational linguistics seemed the closest thing to both — so I signed up for the linguistics programme.
I remember stepping into the office of Walter Daelemans, a pioneer in Natural Language Processing, on the very same day that Lernout & Hauspie — Belgium's greatest language-tech hope — was declared bankrupt. Walter told me that my chances of building a career in language technology, at least in Belgium, had just greatly dimmed. But I was too young and naive to have thought about the money — let alone a career.
In Walter's introductory class to Artificial Intelligence, I first encountered the Talking Heads experiment of Luc Steels — and it instantly changed my life. Everything clicked. Luc's robotic agents weren't just parsing text: they were truly communicating about the world. They invented words, negotiated meaning, failed gracefully at first but inevitably self-organized their community language. The Talking Heads Experiment wasn't just clever. It felt alive.
That moment opened a new vision of language to me. Not as a formal system of rules, but as a living ecology of form and meaning. And I wanted to understand how that ecology works.
From Friction to Flow (2005—2020)
Then I got lucky. Very, very lucky. After graduation, and with the local language-tech bubble burst, I worked for some time as a journalist and as a language teacher in secondary school. But then I learned from Walter that Luc Steels was looking for new people. He had already hired roboticists and computer scientists, and now he was scouting for a linguist who wasn't afraid of computers. And those were still in short supply back then. Luc invited me to his lab in Brussels, and after what must have been the worst job interview ever (I really thought I was there just to visit), I landed a position in Paris where he directed the European branch of the Sony Computer Science Laboratories.
Luc hired me on a three-year contract for a European project. More than twenty years later, I'm still there. At some point, I stopped being a guest and started being part of the furniture.
Like many linguists in a computational world, I felt completely useless at first. My colleagues were building sophisticated systems and running simulations. I could barely get a sentence parsed. It took me six months to produce anything that didn't break. But I dug in. Learned that as a linguist, I had a different perspective. And over time, I found what I could contribute.
I helped develop Fluid Construction Grammar (FCG) and the multi-agent framework Babel it is embedded in. More a musician than an instrument maker, I kept pushing at the edges of our formalisms to see which sounds I could coax from them. If you came across FCG in the mid-2010s and found it quirky, I probably had something to do with it. My colleagues later did a tremendous job at professionalizing the software — most of the stranger decisions got refactored into more elegant code. But if you look deep enough under the hood, you can still find some of my fingerprints.
I had my reasons. Probably.
I also became a Master of Puppets. For my PhD, I conducted multi-agent experiments about the emergence of a grammar for describing argument structure — who does what to whom. My agents would observe actions in a puppet theatre and develop a language for talking about those events.
To understand how such grammars function, I began using FCG for reverse-engineering how real languages solve the same problem. It was an exhilarating time: no one had actually implemented a Construction Grammar approach to natural language that worked for both production and comprehension. The community was brimming with innovative ideas, but without a working model, Construction Grammar was often dismissed as linguistics on holiday.
I'd like to think I helped prove that feeling wrong.
The Narrative Turn (2020—now)
In my recent work, I became interested in what is perhaps our most powerful cognitive tool for making sense of the world and each other: narrative.
Narratives shape how we interpret, organize and assign meaning to events. We weave stories instinctively, turning fragmented experiences into coherent structures, causal explanations and temporal frames.
This shift of perspective has, again, profoundly changed how I think about language, Artificial Intelligence, and even about the nature of science itself.
Each sentence, for instance, is a mini-narrative in its own right — mirroring the larger discourse structures in which it is embedded. And every language user has a talent for creating cinematic experiences. If I say the cat chased the mouse, I am directing a fast-paced action flick. But if I say the mouse was chased by the cat, we flip the camera to the mouse, and suddenly we are in a horror film — watching a poor rodent try to escape from a looming threat.
In both cases, the same story has inspired two entirely different blockbusters — each shaped by the spin the Active and Passive constructions put on the event. How we go from the story (a conceptual scene) to a linguistic movie's final edit (the sentence) — and why we choose one framing over the other — is not just a matter of syntax. It's a lens into cognition, perspective-taking, and communicative intent.
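The camera metaphor can be sketched in a few lines of code. This is a toy illustration, not a model of grammar: the `render` function and its scene dictionary are hypothetical, and real constructions carry far richer conditions. But it makes the key point visible: one conceptual scene, two framings, chosen by perspective rather than content.

```python
def render(scene: dict, construction: str) -> str:
    """Render one conceptual scene through a chosen construction (toy example)."""
    agent, action, patient = scene["agent"], scene["action"], scene["patient"]
    if construction == "active":
        # Camera follows the agent: a fast-paced action flick.
        return f"the {agent} {action}d the {patient}"
    if construction == "passive":
        # Camera flips to the patient: the horror-film perspective.
        return f"the {patient} was {action}d by the {agent}"
    raise ValueError(f"unknown construction: {construction}")

scene = {"agent": "cat", "action": "chase", "patient": "mouse"}
active = render(scene, "active")    # "the cat chased the mouse"
passive = render(scene, "passive")  # "the mouse was chased by the cat"
```

The scene stays constant; only the construction, and with it the viewer's sympathies, changes.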
This matters not just for the study of language, but for Artificial Intelligence and technology as well. Meaning isn't just what a sentence says; it's what it does. The same event can be framed as action, victimhood, achievement, or tragedy. And humans are exquisitely sensitive to these framings: we infer intentions, assign blame, shift sympathies, or reshape decisions based on subtle grammatical cues. An AI system that overlooks this won't just sound robotic: it will never understand how people construct shared realities.
I believe this is why the scientific study of language is more relevant than ever — even in a world where large language models seem to grow ever more powerful. If we want AI to truly center on improving people's daily lives and benefiting society, it has to move beyond data-driven generation and into intention-driven production and comprehension. It needs to recognize not just what was said, but also how it was framed: what was left out, what was foregrounded, and what kind of story the language user is inviting you to inhabit. This is more than a linguistic or technological challenge. It is a socio-cognitive one. And a deeply human one.
Narratives are also indispensable for the scientific enterprise. In recent years, I've become aware of a tension that many researchers now feel. On one side, the traditional image of science as the quest for objective truth: systematic, generalizable, detached. On the other, deconstructivism, which dismantles foundational assumptions, questions categories, and exposes the hidden politics of objectivity itself. These two visions seem to pull in opposite directions: one building towards control, the other unraveling towards epistemic humility.
Narrative-based explanations may offer a way to reconcile them. Science is often tasked with explaining non-recurrent phenomena on the basis of evidence that is fragmentary, indirect, and incomplete. In such contexts, narratives aren't optional; they're integral to how plausible, coherent and causally structured explanations are constructed. Long dismissed as "just-so stories" — intuitive, post-hoc reconstructions rather than genuine vehicles of knowledge — narratives are increasingly recognized for their epistemic role in knowledge-making.
In my work, I try not only to study narrative, but to embody it. My writing style has become increasingly narrative-shaped — a modest attempt to map scientific thinking, and to explore how that thinking unfolds in the same way that all meaning does: through interaction, the meeting of minds, and situated context.
I don't expect my experiment to persuade everyone, but I do hope it's not lukewarm. May it leave you hot or cold, intrigued or infuriated — but never indifferent.
This is where I am right now. But ask me again in five years. I'll have a different story to tell.
From the Loom:
My Published Work

I’ll be uploading selected papers and writings here soon.
In the meantime, you can find my work on Google Scholar.
Let's Connect
I welcome thoughtful collaborations, questions, or reflections. You can write to me at remi.vantrijp@sony.com