
Paper Is Dead, Hardcore Research Must Rise

· 12 min read
DingZhiyu
Southwest Petroleum University

On Thursday, April 2, 2026, from 19:30 to 21:00, I attended a talk by Professor Qiang Xu of the Department of Computer Science and Engineering at The Chinese University of Hong Kong. The title was sharp: "Paper Is Dead, Hardcore Research Must Rise: Self-cultivation for Researchers in the AI Era."

"Paper is dead" is of course a deliberately provocative phrase. But what it really cuts at is not whether papers have value. It is a deeper question: when AI is systematically rewriting the research workflow, what should researchers use to prove their value? And what should count as the real deliverable of future research?

After listening to the full talk, my strongest feeling was that the AI era has not made research easier. Instead, it has made the question of what truly counts as research much clearer.

When Papers Are No Longer The Only Endpoint Of Research

For a long time, papers were almost the most typical representation of research output. Whether a piece of research "held up" often depended on whether it could be written into a paper and published at a sufficiently good conference or journal. Papers are of course important. But if the endpoint of research is completely equated with publication, that paradigm is now being rapidly challenged by the AI era.

Professor Xu repeatedly emphasized that research outputs with real vitality in the future should not only be papers. They should also include tools, APIs, benchmarks, ground truth, and other infrastructure that Agents can call. In other words, the value of research is no longer reflected only in citations. It is also reflected in whether the work truly solves real needs, whether it can be called, reused, extended, and eventually enter a long-running technical ecosystem.

This is, in essence, pulling research back from "writing it out" to "building it out."
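To make "building it out" concrete, here is a minimal sketch of what packaging a research result as an agent-callable tool might look like. Everything here is illustrative and hypothetical (the function, the spec layout, the field names are not from the talk); the point is only that the deliverable is a callable with a machine-readable description, not a PDF.

```python
# Illustrative sketch (all names hypothetical): a research result packaged
# as a callable tool plus a machine-readable description that an Agent
# framework could discover and invoke, rather than as a paper.
import json

def detect_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Toy 'research deliverable': flag indices whose z-score exceeds threshold."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # avoid division by zero on constant input
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# A minimal tool description, in the spirit of common function-calling
# schemas. The structure below is illustrative, not any framework's standard.
TOOL_SPEC = {
    "name": "detect_anomalies",
    "description": "Flag outlier indices in a numeric series by z-score.",
    "parameters": {
        "values": "list of floats",
        "threshold": "z-score cutoff (default 3.0)",
    },
}

print(json.dumps(TOOL_SPEC, indent=2))
print(detect_anomalies([1.0, 1.1, 0.9, 50.0, 1.0], threshold=1.5))  # -> [3]
```

An output shaped like this can be called, reused, and composed by later systems, which is exactly the kind of "long-running ecosystem" vitality the talk described.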

Around this point, the talk also mentioned an evaluation framework worth remembering: ROSE. A good research work should solve a Real need, be Original, be Significant, and retain a certain Elegance. Compared with asking "where was this paper published," this standard is obviously harder, but also closer to research itself.

Because truly good research does not necessarily stop at a paper. It is more likely to continue becoming tools, systems, and methods, and to keep living inside later work.

The Stronger Agents Become, The More They Need A Regulatory Kernel

The second main thread of the talk was system governance in the Agent era.

Today, when many people talk about Agents, they focus more on the expanding boundary of capability: Agents can understand tasks, decompose workflows, call tools, and in some scenarios even show signs of near-autonomous action. But the more this is true, the harder it is to avoid one question: if Agents become increasingly powerful, who ensures that their final behavior remains safe, reliable, and controllable?

Professor Xu's answer was a "regulatory kernel."

This concept is very interesting. It is not simply adding a few rules to a model. Rather, it tries to build a constraint structure between the large language model and the external world, somewhat like an operating system. The large model is responsible for open-ended semantic output, for proposing plans and creating possibilities. The regulatory kernel maps those open semantics into deterministic instructions that can be inspected, constrained, and executed, and then uses boundary checks and rule interception to ensure that the system does not cross lines or lose control.

In one sentence: the LLM thinks; the kernel guards.
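The division of labor above can be sketched in a few lines of code. This is my own minimal interpretation, not the team's actual system: all class and function names here are invented for illustration. The LLM proposes structured commands; the kernel only executes what passes an explicit allowlist and a set of boundary rules.

```python
# Minimal sketch of a "regulatory kernel" wrapper (names are illustrative,
# not from the talk): the LLM proposes actions as structured commands, and
# the kernel intercepts and validates each one before execution.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    tool: str   # which tool the agent wants to call
    args: dict  # arguments for that call

class RegulatoryKernel:
    def __init__(self):
        self.tools: dict[str, Callable[..., object]] = {}  # registered executors
        self.rules: list[Callable[[Command], bool]] = []   # boundary checks

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def add_rule(self, check):
        self.rules.append(check)

    def execute(self, cmd: Command):
        # Interception: refuse anything outside the registered tool set.
        if cmd.tool not in self.tools:
            raise PermissionError(f"unknown tool: {cmd.tool}")
        # Boundary checks: every rule must approve the command.
        if not all(rule(cmd) for rule in self.rules):
            raise PermissionError(f"command blocked by rule: {cmd}")
        return self.tools[cmd.tool](**cmd.args)

# Usage: the LLM "thinks" (proposes a Command); the kernel "guards".
kernel = RegulatoryKernel()
kernel.register_tool("read_file", lambda path: f"contents of {path}")
kernel.add_rule(lambda c: c.tool != "read_file"
                or c.args["path"].startswith("/sandbox/"))

print(kernel.execute(Command("read_file", {"path": "/sandbox/data.txt"})))
try:
    kernel.execute(Command("read_file", {"path": "/etc/passwd"}))
except PermissionError as e:
    print("blocked:", e)
```

The key design choice is that the open-ended model never touches the world directly: every action passes through a deterministic, inspectable checkpoint, which is what makes the system auditable rather than merely "usually well-behaved."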

Behind this lies a crucial cognitive shift. A mature Agent system cannot be built only by stacking "smarter" models. It must have a verifiable and governable underlying mechanism. System design in the AI era is not only a question of capability. It is also a question of boundaries.

AI Levels Techniques, Not Research Ability Itself

The point I agreed with most in the entire talk was that it did not treat AI simply as an "efficiency tool." It went further and asked: when tools become stronger and stronger, what remains irreplaceable in a researcher?

Professor Xu's judgment was clear: AI will indeed flatten many differences at the skill level, but it will not flatten research ability itself. What automation more easily replaces is repetitive labor, low-level "paper-churning," and procedural work built only on proficiency. What remains difficult to replace is insight into problems, judgment about directions, the ability to distinguish reliable information from falsehood, and the ability to move a vague problem toward a verifiable result.

This means that research training in the AI era has changed.

In the future, more important than "can you use tools" will be: can you propose a question worth studying? Can you communicate with AI at a high level? Can you judge which results are genuinely effective and which are merely hallucinations that look like answers? Can you push a piece of work from an idea into an outcome that can truly be reused, communicated, and deployed?

The tools are getting stronger, but the abilities that decide the ceiling are becoming more fundamental.

The Q&A Made The Researcher's "Self-cultivation" Clearer

Several points from the Q&A after the talk are worth recording separately.

Someone asked whether AI would make the gap between people larger, especially whether those already good at organizing, judging, and leading would pull further ahead with AI support.

Professor Xu's answer was not pessimistic. He did not think research ability would become permanently locked in the hands of a few "naturally gifted" people. Large models may well amplify efficiency gaps. But research thinking itself can be trained. How to sense the value of a problem, how to judge whether information is reliable, and how to organize one's research process are not mysterious talents that cannot be learned. They are abilities that can be built through long-term training.

He also mentioned that when reading papers, the most important thing is not only to see what conclusion the authors reached, but to ask: how did they get onto this path? What a mature researcher truly needs to cultivate is not only a store of knowledge, but research taste and judgment. You do not only need to "understand a paper." You need to see how a problem was discovered, defined, advanced, and solved.

This is very important. AI can help you summarize papers faster, extract information, and generate ideas. But if you lack judgment, you can easily be led around by content that is fluent on the surface but empty underneath. The stronger the tools become, the scarcer judgment becomes.

In The AI Era, Education Should Not Mainly Train "People Who Can Solve Problems On Tests"

The talk also included a discussion about education that left a deep impression on me.

With the help of AI, knowledge acquisition and skill learning will become increasingly democratized. Many things that once required long accumulation are now becoming easier to reach and easier to call upon. But this does not mean education will become lighter. On the contrary, it may become harder.

Because when knowledge itself becomes easier to obtain, what becomes scarce is no longer how much content you remember, but how you judge what is worth learning, what is worth doing, and what is worth believing. What truly matters in the future may not be only teaching students to master knowledge, but cultivating judgment, the ability to choose, critical thinking, and the ability to form a sense of problems around real interests.

To put it more directly, AI can become an executor, but only if you are first a person with a sense of direction. If someone has no problem consciousness, stronger tools will only let them spin in place faster.

Closing

So what "Paper is dead" really means is not that papers are useless, but that the era in which papers alone serve as the endpoint of research is ending.

Real "hardcore research" in the AI era asks researchers, on one hand, to produce outputs closer to real-world needs, leaving deliverables that can be reused, called, and incorporated into systems. On the other hand, it also asks researchers to cultivate again those most basic and most crucial abilities: problem consciousness, judgment, research taste, and the ability to work with new tools.

The tool revolution has already happened. What will really decide how far a researcher can go next may not be whether they can use AI, but whether, in the AI era, they can still see which problems are truly worth solving.

Appendix: Original Meeting Notes

The following preserves my note-style record from the talk for reference.

Summary

This session centered on changes to research paradigms in the AI era. It discussed the reshaping of research deliverables, the regulatory kernel in the Agent era, and core questions around research value, AI tool use, and educational models.

1. Reshaping Research Paradigms And Deliverables

The core value of research should not be limited to paper publication and citations. It should also be reflected in whether the work can be called by AI Agents and generate real social impact.

Future research deliverables should include infrastructure such as tools, APIs, benchmarks, or ground truth that Agents can call, ensuring the long-term vitality and ecological position of the work.

The "ROSE rule" was proposed as a new standard for evaluating research: solve a real need (R), possess originality (O), have significance (S), and demonstrate elegance (E).

2. Regulatory Kernel In The Agent Era: An Agent "OS"

To address uncertainty, uncontrollability, and potential risks in Agent systems, the talk proposed building a "regulatory kernel" (OS) as the Agent's "rein."

This kernel maps semantic output from large language models into deterministic instruction sets, combines them with rules for boundary checking and interception, and forms a neuro-symbolic system.

The goal is for the LLM to create possibilities and formulate plans, while the regulatory kernel guards the boundaries and ensures safe, reliable execution.

3. Key Issues And Views

Research value and "water papers" (low-value filler papers): the value of hardcore research lies in deep insight into problems and real solutions, not simple repetitive labor. Automated paper-churning pipelines produce no cognitive gain, while a high-quality research process itself builds personal ability.

Use of AI tools: AI tools such as large models can effectively flatten skill-level gaps among researchers with different capabilities. But core "research thinking," such as how to propose valuable questions, how to communicate effectively with AI, and how to judge truth from falsehood, still requires training and cultivation.

Evolution of education: facing AI's impact on knowledge acquisition, education should shift toward cultivating students' core judgment, critical thinking, and cross-domain integration ability, rather than simply transferring knowledge. The role of the classroom should become guiding students to solve problems and engage in deep exchange.

To-do

  1. Open-source project release

Qiang Xu will soon release the regulatory kernel system (OS) developed by his team and invite the community to try it and provide feedback.

Q&A Notes

Liu Weifeng: In the first year or two, my feeling was different from the last two years. AI seemed to flatten the research ability of mid-level and senior people, but recently it feels like the gap has widened. People who can use large AI models well are pulling ahead, and many of them have strong organizational and management talent, abilities they have had since childhood. These abilities let them make better use of AI. This gap seems hard to catch up with and may become fixed. How does Professor Xu view this?

Xu Qiang: I have a different opinion. The Industrial Revolution flattened differences in physical strength, and large models flatten differences in human skills, but I do not think they flatten research ability. The research way of thinking can be trained: how to sense the value of a problem, how to communicate effectively with AI, and how to judge whether information is true or false. If you consciously train research thinking, it will not be flattened by AI. So I do not accept that research ability in the AI era is strongly tied to leadership and organizational talent. Xu also mentioned in his book that the gap will not necessarily grow larger; research has patterns and can be trained.

There are too many papers to read, so you must select carefully. When reading papers, the main thing is to consider how the author arrived at this path, not just the research content itself. Improve your research taste and judgment, and then you can improve the efficiency and effectiveness of using AI for research.

What kind of model to use: use the best AI tools, and pay for the strongest models to improve yourself.

School education: with AI, knowledge and skills become more equalized, but judgment and the ability to choose still belong more strongly to experienced people, such as senior students and teachers.

Children's education: every era has its own needs. It should not be viewed through the lens of people like us who carry the strong smell of the old order. Knowledge and skills themselves are becoming more equalized, so education now needs to change in this respect. Children must have things they are interested in and love. If they love something, they will naturally have discoveries, needs, and ideas, and at that point they can use AI to work on them, making AI the worker for their own interests and passions. People with ideas and ability will not be replaced. But if someone has no interests and no ideas, then AI really may replace them to a large extent.