Grammatical Error

We answered before we understood the question. 

The pathways movement’s answer is: skills are discrete, portable, measurable units that are essential to success in a changing labor market and that can be credentialed and matched to employer needs. This post explains why that answer gets the question wrong. Fair warning: our explanation draws extensively on research spanning cognitive psychology, learning science, political science, and sociology. We are aware that this is a lot. So we took our answer to the question and diagrammed it. Doing so inspired glee for Charlotte and unpleasant memories for Kyle. 

Now you have a decision to make. You can keep scrolling past the diagram to read the post straight through, or you can click on each part of the sentence below to learn more and then jump to the parts of the post you care most about. We won’t be offended if you skip the dense parts. (We will, however, notice.) 

Skills [SUBJECT] are discrete, portable, measurable units [PREDICATE] that are essential to success in a changing labor market [RESTRICTIVE CLAUSE] and that can be credentialed and matched to employer needs [DANGLING MODIFIER]

Click any part of the sentence to learn more, then follow the link to jump to that section of the blog.

The corrected sentence
Skills and dispositions are situated, emergent capabilities developed through practice that prepare young people for future learning across a working life and that enable them to navigate and challenge inequitable systems.

Subject: Skills as the answer

So the story about credentials and degrees turned out to be more complex than we bargained for and rooted in more than a century of unsuccessful efforts to use education to solve structural problems. A reasonable person might wonder if the right move is to focus on the skills underneath the piece of paper. 

There’s a clear logic to reaching for skills as an answer. Employers are looking for skills—we can all see them right there in the job postings. Education systems are set up to credential learners. The credential market may be flooded, but skills are what credentials are supposed to measure, so if we just get better at identifying and assessing skills, the system should work. And focusing on skills could provide a solution to inequities in access to—and the outcomes of—credentials and degrees. Forget the piece of paper: skills seem to center ability over gatekeeping and substance over signaling. 

This logic has made it hard to see that the pathways movement’s relationship to skills has a Hitchhiker’s Guide to the Galaxy problem. In the book, a civilization builds a supercomputer to answer the Ultimate Question of Life, the Universe, and Everything. After seven and a half million years, the answer finally comes back: 42. The computer is unapologetic in the face of (rather understandable) disappointment in its response. The answer is entirely correct. The problem is that nobody understood the question. Rather than figure out for themselves what the question actually was, they build a bigger computer.

It sometimes sounds like the answer to life, the universe, and everything is skills. Skills-based hiring. Skills-first policy. Transferable skills, durable skills, technical skills. Credential inflation? Skills will fix it. AI-driven disruption? Learners and workers need more and different skills. Employer dissatisfaction? Workers lack the right skills. Equity gaps? Skills-based hiring will get past the bias of degrees. We’ve built extensive infrastructure around skills as the answer, putting millions of dollars and years of effort into building bigger computers without pausing to clarify the question—which is why we keep getting 42.  

It’s an answer we’ve reached for ourselves many times. In 2017, one of us (it doesn’t matter which—we’ve each written similar things what feels like 57 million times) explained the operating logic in a conference brief. The interactive graphic on the right shows how much of it now reads as more than a bit cringe given the arguments we’ve been making in this series. The piece wasn’t careless, but it was responsive to structural pressures, including the need to demonstrate employer alignment, to make the case for public investment, and to speak a language that workforce systems recognize. 

Those pressures keep funneling the pathways conversation about skills back into the same grammatical patterns. Our default syntax describes skills as discrete, portable, measurable units. It says we must build assessments, taxonomies, competency frameworks, classification systems, stackable credentials, learning and employment records (LERs), and digital wallets. The goal is to make sure that employers can understand which people have which skills. 

But the grammar of skills doesn’t capture what’s true and important about them. Instead, it treats young people as workers in waiting, each with a bundle of skills to be optimized and made legible to employers. We’re not arguing that skills don’t matter. They do—to employers, obviously, but more importantly, for whether work is meaningful, whether people can adapt, and whether young people develop a sense of purpose and agency while achieving the economic security needed to thrive.

Decomposition No. 1 — A Conference Brief, 2017

Hover over the underlined phrases. (On mobile, tap them.)

What we’re arguing is that the grammar of skills doesn’t match what skills actually are or how they develop. And because the grammar shapes how hiring, credentialing, and training work, it both misrepresents the labor market and actually helps produce the “skills gap” it claims to diagnose and address. This mismatch has real consequences, particularly for the young people furthest from opportunity, who end up bearing the costs of an infrastructure designed for something other than their development. Skills may not be the wrong answer, but we’ve been asking the wrong question. 

Predicate: What our answer gets wrong about skills 

Decades of research across cognitive psychology, learning science, political science, and sociology converge on the same finding: the grammar of skills is based on assumptions that don’t hold up.  

The evidence leads to four key conclusions that contradict what our answer says about what skills are and how they work. You can take our word for it, or you can click on each headline to learn more about the extensive research underneath the conclusions. 

Skills are not tidy, portable units.

The grammar of skills assumes you can break a job into discrete tasks that each require one or more specific skills that can be matched to human capabilities. This is accomplished through skill decomposition, which aims to break down human capability into smaller and smaller units to get to something that can be defined, sequenced, and measured, then added up to predict performance. It’s a foundational instructional design practice, and it underlies everything from competency frameworks to skills taxonomies to credential design. The problem is that cognitive science suggests it doesn’t work. What distinguishes experts from novices isn’t a neat little bundle of portable skill units. It’s rich, organized, domain-specific knowledge built through years of practice in specific contexts, in relationship with other practitioners, using the tools and norms of a particular field. Abilities are developing forms of expertise, not fixed properties you can inventory.

Skills and knowledge can’t be abstracted from the contexts in which they’re learned and used. Competence comes from the interaction of knowledge, context, relationships, and tools. A person doesn’t “have” critical thinking the way they have a credential. Someone thinks critically about something specific, within a framework, using the tools of a particular discipline or community of practice. When students, educators, and employers were asked to define “communication skills,” they didn’t describe one portable ability; they described multiple distinct practices tied to specific situations. If we teach and assess skills in ways that don’t recognize context, there is a very real risk that students will be unable to apply them in real-world situations.

Transferring skills across contexts is a tricky business.

A skill learned in one context, whether a high school classroom, an internship, or a credential program, doesn’t reliably transfer to a different one. People who learn to solve problems in one context don’t automatically apply that ability in another, even when the underlying logic is identical. Educators see this all the time. For example, one study showed that computer science students who successfully completed a take-home assignment couldn’t apply the same skills on an exam two weeks later. That’s because the context shifted; it was the same class and the same skills, but a different assessment format. If skill transfer breaks down within a single course, imagine how much harder it is across industries. 

Our strategies depend on what researchers call “far transfer,” meaning the ability to deploy skills across domains, contexts, and time. But the likelihood of this kind of transfer is near zero. Transfer can happen, but it requires deep domain knowledge, practice across multiple contexts, and intentional instructional design. It is fragile, conditional, and far less common than we assume.  

This has direct implications for our approach to skills and credentialing. A credential that certifies someone’s ability to perform a skill in an assessment context can’t reliably validate their ability to deploy that capability in a workplace. The assessment measures a proxy, not the thing itself. And if an assessment asks whether someone “has” a skill, it’s asking the wrong question. The right questions are: What are they ready to learn next? In what contexts can they apply what they know? How, and in what ways, are they developing?

Skills assessments are basically blurry snapshots of something that’s in motion.

Because abilities are developing forms of expertise, the grammar of skills has a measurement problem. Skills assessments evaluate whether someone can demonstrate a skill at the moment of the assessment, in the context of the assessment, and typically in isolation, without access to peers, tools, or resources. (Which is, as the research points out, sort of ridiculous.) Given the importance of context and the challenges of transfer, those aren’t particularly helpful things to measure if what one is trying to learn is whether someone can apply a skill outside of an assessment context or whether someone has the capacity to continue to develop their expertise. 

And the problem goes deeper than assessment design. People don’t just learn about a domain; they become someone within it. Skill development is entangled with identity formation, and the two aren’t separable. A framework that isolates skills from identity, social belonging, and motivation is missing much of what actually makes someone good at something. 

The same skill also looks different at different developmental stages, which means that a framework that asks a yes-or-no question about whether someone has a skill, or tries to rate their proficiency on a scale from one to four, is just a point-in-time measurement of something that is constantly changing. And that snapshot tells you very little about someone’s trajectory.

Skills taxonomies serve systems, not people.

The grammar of skills requires classification systems that name, inventory, and standardize what counts as a skill. These systems are useful to the bureaucracies that power education and hiring. They make the complex reality of human capability into something administratively manageable: tidy units that can be measured, credentialed, and treated as properties of a person. 

But classification systems are built through choices about what counts as a skill, how to define it, and what proficiency looks like. Someone has to decide, and those decisions always involve the loss of information. Classification turns qualities into quantities and makes the complex reality of skills into something that can be bureaucratically managed and administered. The problem is that classification strips away context and the tacit knowledge that lives in relationships, judgment, and experience, making anything that doesn’t fit into the taxonomy invisible. 

This effort to make people legible to credentialing and hiring systems is not the objective, unbiased exercise it appears to be. Part of the appeal of classifying and measuring skills is that it seems highly technical and scientific in ways that suggest it’s removing subjective human judgments. Don’t be fooled. The bias hasn’t disappeared; it’s just been pushed into the design of the taxonomy, where it shows up in things like skill definitions, assessment design, and proficiency thresholds. Which is exactly where it’s hardest to see and challenge. The people being measured against these classification systems experience what sociologists call “torque,” the friction people feel when their lived reality doesn’t fit the categories a system imposes. People whose knowledge was acquired informally, outside credentialing institutions, experience the most torque, because informal knowledge is exactly what classification systems are most likely to miss.

Decomposition No. 2 — A Skills Taxonomy, 2026

The highlighted words are context. Flip the toggle and watch what’s left.

In context
In a taxonomy
Skills Classification System

The taxonomy’s language was embedded in the description of each skill in practice. Nothing had to be rephrased or rearranged to create the classification system, but the context that made each skill meaningful in a professional setting was stripped out. The grammar of skills doesn’t translate the nuances of situated practice into portable units in the way we’d like to think it does. It just reduces complex capabilities to the lowest common denominator.

So the grammar of skills doesn’t correspond to how skills actually work, and the project of breaking them down and classifying them isn’t the neutral, technical exercise we thought it was. The interactive graphic on the left shows what happens to durable skills—the ones we say are most likely to transfer across contexts and most critical in a rapidly changing labor market—when they’re restructured so they fit into a taxonomy. The context is literally deleted. 

We talk a lot about preparing young people for their futures, but the tools we’re using try to determine what skills young people have right now, then match them to the skills employers are looking for right now. Even if taxonomies could preserve context and assessments could capture a present state clearly, they’d still be answering the wrong question. The right question is whether a young person is positioned to keep learning, which is both a better description of what employers actually need in a rapidly changing labor market and a better fit for where adolescents are developmentally. 

The research also rather undercuts the claim that skills-based and skills-first strategies are well positioned to help people who are shut out of opportunities to earn degrees. These strategies reproduce the same problem we’ve identified in employer hiring processes: knowledge and skills that are acquired informally go unrecognized. Employers seem to sense that something isn’t working, even if they can’t name it. The most comprehensive study of skills-based hiring to date found that fewer than 1 in 700 hires was affected by firms’ implementation of skills-based hiring. The reason isn’t that employers were insincere or unmotivated; it’s structural. Hiring managers are confused and overwhelmed by the vast credentialing ecosystem and still default to degrees over skills because the alternative—assessing individual capability in context—is slow, expensive, and hard. Yet we keep responding by doubling down on the grammar of skills through credential fluency initiatives, applicant tracking system integrations, and hiring manager training. It looks an awful lot like building that bigger computer.  

Restrictive clause: Which skills count and who gets to decide 

Our 2017 conference brief urged the cultivation of foundational employability skills for a future of work shaped by automation. In 2026, the grammar of skills has a new favorite word: “durable.” We’re again turning to skills as the solution, this time to AI-driven labor market disruption. It feels intuitively right: if technical skills have a shrinking half-life due to rapid technological change, then we should invest in the human capabilities AI can’t replace, like creativity, leadership, collaboration, and adaptability. 

There’s real evidence behind the intuition. The economic returns to technical skills have declined since 2000. Meanwhile, labor-market returns to durable skills have grown, and young people with these skills are more likely to complete bachelor’s degrees. But economists say we need a more complex view of how skills function. David Deming and Mikko Silliman have argued for treating workers not as bundles of skills to be inventoried, but as agents who decide how to allocate their effort across job tasks. This framing puts durable skills and workers’ judgment, context, and adaptation at the center.  

The challenge is that durable skills steer us toward the part of the taxonomy most susceptible to bias, and we don’t talk much about that risk, let alone proactively manage it. The grammar of skills treats skills as neutral properties waiting to be identified and measured. But skills, especially durable skills, are socially constructed: our understanding of them depends on subjective judgments about what “good” looks like and who meets that standard, judgments shaped by social, historical, and cultural contexts. The power to determine whether someone has a skill lies with whoever is observing and evaluating that skill, filtered through their own assumptions and biases.

The social construction of durable skills plays out at two key levels: 

What work counts as skilled

Every job requires skills, but only some are considered “high skilled.” That distinction reflects a consensus about which skills we value, not an objective measurement. For example, Black and Latine women are overrepresented in care economy roles (e.g., home health aide) that require listening, empathizing, and managing feelings—skills that are often associated with mothers and coded as feminine. Because these capabilities are seen as “natural” for women, they become invisible in formal classification systems, and many jobs that require them are considered low skilled.

The entire binary between “technical” and “durable” skills is itself a gendered classification, a point that becomes even more obvious if you recall the older language of “hard” and “soft” skills. Skills perceived as masculine are credentialed and compensated; skills perceived as feminine are just assumed. And the categories we’ve defined for durable skills themselves don’t hold up as distinct constructs. One study found that the rapid proliferation of terms for durable skills has led to the creation of at least 40 supposedly distinct categories that overlap so heavily that they are largely incoherent.

Which people count as skilled

This is where the turn to durable skills gets especially risky because of the way racial and gender biases are baked into skill definitions. For example, in the 1950s and 60s, most computer programming was done by women and considered clerical work. When men entered the field in growing numbers starting in the 1970s, the same jobs were reclassified as highly skilled roles requiring degrees. The work didn’t change, but who performed it did. That was all it took to trigger a change in its perceived prestige and skill level. 

Assessments of durable skills in the workplace often function as conduits for racial bias. Standards for “professionalism” make white, middle-class cultural norms the baseline for competence. Employers often describe “skills deficits” in ways that come down to interaction and self-presentation style and decline to hire Black men on that basis. Hiring processes and job descriptions create screening processes that are race- and gender-neutral on their face, but actually introduce bias. When a hiring manager evaluates whether a candidate “communicates effectively” or “demonstrates leadership,” they’re usually assessing whether the candidate communicates and leads like the people already in the room.

The recent turn to a hiring infrastructure powered by algorithms is encoding these biases at scale. AI-mediated hiring tools choose candidates with white-associated names over candidates with Black-associated names over 85% of the time. Amazon scrapped an AI resume-screening tool that was systematically filtering out women. Efforts to address these biases have so far yielded little. For example, New York City passed a first-of-its-kind law requiring bias audits for automated hiring tools. It resulted in near-total noncompliance—and the law doesn’t even require employers to stop using a tool if bias is found. We keep getting 42.

This doesn’t mean we should abandon structured assessment in hiring, given that the alternative to structured assessment is unstructured judgment, network-based hiring, and “cultural fit” evaluations, all of which are demonstrably more inequitable. Structured approaches like work samples and structured interviews are among the strongest predictors of job performance available. (In other words, it’s easier to challenge a hiring manager who says, “I just didn’t get a good vibe” than it is to challenge a seemingly neutral framework.) But the question is what to structure assessment around. And the current answer—discrete, decontextualized skill units drawn from taxonomies that encode the biases they claim to transcend—is not it. 

The recent move toward rebranding durable skills as “human skills,” which is meant to distinguish them from what AI can do, exacerbates the problem by making it harder to see. “Human” implies something universal and innate. There is nothing universal about how these capabilities get defined or assessed. 

When the grammar of skills encodes bias in both the definitions and the assessments, then interventions that don’t directly confront the question of who has the power to define and assess skills will deepen inequities. If we don’t address this, we will end up funneling young people into the most bias-laden part of the skills taxonomy and calling it future-proofing.  

Dangling modifier: How our answer attaches to the wrong thing

Our approach to skills relies on a model of human capability that doesn’t hold up, and we’re now pivoting toward the part of the taxonomy most susceptible to bias. These two problems are connected by a third: the grammar of skills reflects what researchers have called a skills fetish that treats skills as the primary driver of people’s economic outcomes while ignoring employer practices, work organization, and institutional context. If that sounds quite a bit like the tendency toward educationalization that we’ve talked about before, it should. Skills fetishism is educationalization operating at the level of individual workers instead of education systems. And it produces the same kinds of distortions we’ve seen before. 

We’re stuck on a treadmill (or maybe it’s more like a hamster wheel). If skills are the path to economic security, but the skills employers want keep changing—and doing so at a dizzying pace thanks to AI—then the grammar ends up generating perpetual demand for its own products. The skills and credentials people gain expire almost as soon as they’re acquired, so then they have to start over. Microcredentials function as “gig qualifications for a gig economy,” requiring people to perpetually guess what employers will want next and invest accordingly. The infrastructure built around this logic keeps growing: taxonomies, assessments, credentials, digital wallets, LERs, with each producing demand for the next. But it doesn’t produce long-term economic security for the people cycling through it. In fact, the treadmill reinforces the precarious labor-market positions of learners and workers. 

Skills fetishism places responsibility for keeping up with a changing labor market on individuals and education systems instead of asking employers to take on training costs. This leads to a lopsided landscape of investments in infrastructure intended to build individuals’ skills, which generates very little pressure on employers to invest in training, improve wages, or change their hiring and retention strategies. On the contrary, employers have systematically dismantled internal talent development systems over the past four decades, shifting the costs of skill development onto workers and public systems. The grammar of skills reframes a problem of employer investment and labor market power as a problem of worker preparation, then builds infrastructure to solve the reframed version. 

Just like educationalization, skills fetishism pushes us toward strategies that aren’t aligned with what young people need. Designing backward from current employer skill requirements is a questionable strategy in a rapidly changing labor market, and it actively works against what we know about adolescent development. Young people need relationships and space to explore who they’re becoming, not a validated bundle of competencies. The grammar tells them to take responsibility for acquiring the skills employers want right now, and to adapt again when those requirements change. It is, once more, an answer to a question we haven’t paused to clarify.  

There’s also a critical distinction between getting hired and doing meaningful work that our approach obscures. It turns out that skills are actually more important for the latter than the former. College graduates with “practical,” career-oriented majors like engineering get access to a hiring pipeline, including employer connections, clear recruitment channels, and institutional networks, that helps them land entry-level jobs. But liberal arts graduates are actually more likely to use the capabilities developed through their education at work—and therefore to find that work meaningful. Their challenge isn’t skills; it’s institutional access. 

Our approach claims to support the young people with the least institutional access, who have been shut out of opportunities to earn degrees. But what we’re offering them is skills assessments, not the exploration and network-building that actually lead to institutional access. The grammar of skills doesn’t measure relationships or professional social capital, doesn’t credential them, and therefore doesn’t see them. 

The corrected answer: What an answer that understands the question might look like

The research has plenty to say about what the grammar of skills gets wrong. (At this point in the blog, you may be thinking it has entirely too much to say on that topic.) But it also points toward what a revision might look like. We’re not arguing that the infrastructure should be dismantled overnight or that all of it is useless. Some of it is truly useful, but too much of the underlying logic crowds out strategies that center the needs and aspirations of young people. 

So what should replace that logic? One alternative comes from research on preparation for future learning (PFL), which reframes the question the grammar asks from “can you apply what you learned right now, with no support?” to “has your prior learning prepared you to learn effectively in new contexts?” PFL asks whether people can orient themselves in new environments, connect new information to what they already know, and adapt as contexts change. These capabilities are needed for success in the workplace. The young people entering pathways today will work in jobs that don’t yet exist, using tools that haven’t been invented, in an economy reshaped by forces we can’t fully anticipate. What they need is not today’s skills. It’s the capacity to keep learning. 

A dispositional framework pushes the question even further. A disposition isn’t a skill you have or don’t have. It has three components: 1) inclination, the tendency to actually use a capability; 2) sensitivity to occasion, the ability to recognize when a situation calls for it; and 3) ability, the capacity to follow through. (In this post, we’ve demonstrated that we have the inclination and ability to apply what we learned in grad school about conducting lit reviews and performing close readings of texts to a pathways blog. Whether we’ve demonstrated sensitivity to occasion is perhaps an open question.) The grammar of skills measures ability alone. It says nothing about whether someone is inclined to use what they know or can recognize when a situation calls for it. A dispositional framework won’t solve the bias problem. Dispositions are culturally inflected, and who gets identified as “critical” or “curious” will reflect the same assumptions as durable skills do. That problem requires direct, separate attention.

But focusing on dispositions changes what we design for. We can’t just swap in a disposition taxonomy for a skills taxonomy. Developing a disposition means helping young people practice noticing when a capability is relevant, experience the value of deploying it, and build the inclination to bring it to bear in unfamiliar situations. That means pathways should: 

  • Treat exploration and exposure as essential steps toward forming dispositions. A disposition doesn’t develop through a single experience. It forms when young people practice a capability across varied contexts, so they develop a sense of when it’s relevant and an inclination to use it without being prompted. 
  • Explicitly attend to metacognition, the capacity to reflect on one’s own thinking and learning. A dispositional framework would encourage young people to monitor their own reasoning, recognize when a strategy isn’t working and shift approaches, and identify what they don’t yet know. The grammar of skills asks, “Can you do this?” Metacognition asks, “Do you know what you’re doing and why?” 
  • Measure outcomes a skills framework can’t capture. Those include whether young people recognize what situations call for what capabilities, whether they deploy those capabilities proactively rather than only when prompted, and whether they’re prepared for future learning, including connecting new information to what they know so they can adapt and succeed in new contexts. 

Together, PFL and a dispositional orientation can set young people up for success over the long term in a rapidly changing labor market. They also point to three key strategies for pathways design.

1. Design for development, not skill validation.  

If capability is situated, emergent, and entangled with identity, then pathways need to create the conditions under which development actually occurs: varied contexts, authentic problems, meaningful relationships, reflection, and the freedom to explore and correct course over time. 

Existing pathways strategies like work-based learning and mentoring are the right structural elements, but only if they’re designed as developmental experiences, not as vehicles for skills validation. The difference is in the orientation. Are we asking what skills this experience will help a student demonstrate on a credential assessment? Or are we asking what this experience will help a student learn, explore, and become ready to learn next? The first question pushes toward standardized programs focused on measurable outputs. The second pushes toward rich, varied, relational experiences centered on growth. 

This also means treating exploration as a core feature of pathways, not a luxury. Developing a disposition requires practicing a capability across varied contexts. A young person who’s been in one work setting pursuing one credential has one data point. That is not enough to form a disposition or to figure out who they’re becoming. The outcomes we look for from work-based learning should focus less on whether a student checked technical and durable skills off a list and more on whether they developed new ways of thinking about problems, formed relationships that persist, tried on a professional identity, and built capacity to learn in new contexts.

Decomposition No. 3 — A Skills Evaluation Rubric, 2026

You’ve probably seen a rubric very much like this one before. It’s the kind of form thousands of WBL programs use every year. Click on each skill to see what it measures, what it misses, and what it incentivizes.

Work-Based Learning — Supervisor Skills Evaluation
Student: _________________
Date: _________________
Placement: _________________
Supervisor: _________________
The design question

The rubric asks the same question about every skill it lists: whether a student can perform a behavior, in isolation, at a point in time. It can tell us, with considerable precision, whether a student scored a 3 or a 4 on “communication.” It can’t tell us whether a young person is becoming someone who knows how to read a room, when to push back, and how to build trust and relationships. We are precise about all the wrong things.

What if the rubric measured dispositions, relationships, and readiness to learn?
  • Did the student encounter ways of thinking about problems they hadn’t seen before?
  • Did they form relationships with adults who see their potential and will persist in their lives?
  • Did they try on a professional identity, and did that identity expand or narrow their sense of what’s possible?
  • Are they better prepared to learn in the next context they enter?
  • Did the experience connect them to people and networks they wouldn’t otherwise access?
2. Hold systems accountable for the experiences they provide. 

We are not against accountability or assessment. We are for accountability designed around what capability actually is and how it develops. That means measuring whether systems create the conditions for rich learning (authentic contexts, varied exposure, developmental relationships, and metacognitive practice) and disaggregating the data to ensure we’re supporting the young people furthest from opportunity.

This is a harder accountability challenge than measuring credential completion. But it’s the right one. The capabilities that predict postsecondary success are exactly those that resist standardized measurement. Our current approach to assessment is precise about all the wrong things. Sure, we can tell you how many students earned a given credential or whether they demonstrated a specific skill in an assessment context. We can’t, however, say much about how many learners are developing the dispositions, networks, and adaptive capacity that will allow them to thrive across a working life. We should be demanding both kinds of measures, and holding ourselves—and our systems—accountable to both.

3. Make systems legible to young people instead of making young people legible to systems. 

The grammar of skills is organized around making learners and workers readable by education systems and labor markets. But there is a learner-centered alternative: equipping young people to read the system, understand how skills get defined and whose knowledge counts, and exercise agency within and against those structures. 

We shouldn’t just be handing young people lists of durable skills and telling them these are what employers want. We should be giving youth the tools to analyze those lists and ask who defined those capabilities, what got lost and who got left out in the process, and whose interests the definitions serve. A young person who understands the political economy of skill classifications is far better positioned to exercise real choice and agency than one who has simply tried to match a taxonomy someone else designed. Young people’s need for those conceptual and analytical tools is part of why we have argued before for the importance of the liberal arts in pathways. The goal isn’t humanities-as-durable-skills development through history courses that teach critical thinking the way a training program teaches welding. The goal is to equip young people with the tools and frameworks needed to understand and navigate—and challenge—the systems they’re entering. 

Ethnographies of Work (EOW) offers one example of what this looks like in practice. Originally designed as a sociology course at Guttman Community College, EOW asks students to use ethnographic methods to analyze workplace dynamics such as racialized professional norms, cultural matching in hiring, and how professional social capital operates in hiring and career advancement. It equips young people to critically analyze how skills get defined and to engage with the labor market as a system they can understand and challenge, not just a set of requirements for them to meet. EOW has been replicated across multiple disciplines at Bunker Hill Community College and could be adapted at the high school level or as a dual enrollment course. EOW is concrete, scalable, and exactly the kind of strategy the grammar of skills can’t see—because it produces a capability, not a credential. 

Period: Where we reach our conclusion 

Still looking for friends

We’ve had some great conversations with folks who read our last post, and we’d love to have more. If you want to talk about the grammar of skills—or just tell us we used the word “grammar” too many times in this post—we’re here for it. You can schedule time to talk with us whenever it’s convenient for you.

There’s a pattern here, and it’s worth naming plainly. Skills-based hiring is presented as an upgrade from credential-based hiring, but it replaces one proxy employers don’t much understand or trust with another. Durable skills are presented as an upgrade from technical skills for young people who will need to adapt to a changing labor market, but they steer us toward the part of the taxonomy most saturated with bias. Algorithmic assessment is presented as an upgrade from human judgment, but it encodes that judgment at scale and makes it harder to see. Each iteration that claims to offer an improvement to our systems ends up constrained by the grammar of skills.  

That gap between what we know and what we build isn’t a mystery. The research has been perfectly clear for decades, but the grammar of skills is remarkably resistant to the evidence and the nuance it demands. That’s because the grammar is the product of structural incentives that are powerful and real. Employer expectations, the need for measurable outcomes, and the political appeal of “skills” as a seemingly neutral frame don’t disappear just because we acknowledge them. But naming them is how we start designing something that doesn’t simply reproduce them.

We started by saying that skills may not be the wrong answer, but we’ve been asking the wrong question. The question isn’t “what skills do young people need?” It’s “what conditions do young people need in order to develop the capabilities, dispositions, relationships, and adaptive capacity that will carry them across a working life?” The grammar of skills can’t ask that question because it’s organized around making people legible to systems rather than making systems work for people. Answering it requires designing for development rather than measurement, holding systems accountable for conditions rather than holding individuals accountable for skills, and equipping young people to read the system rather than just perform within it. None of that is easy, and we’re not pretending to have it all figured out. But we’d rather be trying to answer the right question than building a bigger computer.


This post is part of All4Ed’s Normal Gets Us Nowhere series, which seeks to ask hard questions, spotlight fresh data and thinking, challenge longstanding assumptions, and offer new approaches that go beyond tinkering in order to contribute to the development of the next generation of pathways strategies. We don’t have all the answers about the right approach, and we are committed to working with both long-time pathways leaders and those new to the conversation to identify and test new ideas and strategies. If you’re working to build better pathways systems, we’d love to learn more and think about how we can work together, so please get in touch.  


Meet The Authors


Charlotte Cahill
Senior Advisor


Kyle Hartung
Senior Advisor
