ChatGPT has taken a bac exam, and the result is a real cold shower

Each year, the baccalaureate produces its share of brilliant, clumsy or unexpected papers. But in 2025, one of them was written neither by a stressed student nor by a seasoned teacher. Produced by ChatGPT from an official philosophy exam question, this essay was graded like any other paper. Careful, well written, in line with formal expectations, the copy suggested an honorable result. Yet ChatGPT's bac performance defied the forecasts, revealing that mastery of language is not always enough to convince on substance.

France 3 Hauts-de-France submitted a real final-exam question to ChatGPT: "Is truth always convincing?" The artificial intelligence had to put itself in the shoes of a high school student, with the goal of producing an essay that was structured, well argued, and worthy of a good grade.

At first glance, the result is impressive. The text is clear, well written, free of errors, with an introduction, a development and a conclusion. ChatGPT follows the instructions to the letter. The copy appears to meet the formal expectations of the exercise.

But when the essay was handed to a philosophy teacher, reality proved more disappointing. Despite its academic appearance, the copy lacks depth, real analysis and nuance. The verdict is final: ChatGPT's bac grade reaches only 8 out of 20. A score that highlights the structural limits of the model.

ChatGPT's bac grade reveals the limits of AI in the face of human thought

The teacher's correction did not stop at a simple grade. It highlighted several major flaws in the artificial intelligence's work. One of the most striking criticisms concerns the very treatment of the subject. ChatGPT shifts the initial question, "Is truth always convincing?", toward a vaguer problem: "Is truth enough to convince?" This shift weakens the relevance of the analysis.

Another point underlined by the teams at France Info: the essay's plan is too visible, almost mechanical. Where one expects a fluid progression of ideas, the AI offers a succession of logical blocks, without a real common thread or subtle transitions. The argument stays on the surface, the examples are cited without real perspective, and philosophical notions are never defined.

This observation highlights a major limit of today's artificial intelligences. They formulate ideas, but do not really understand them. In philosophy, writing is not enough: one must also question the meaning of words and dig into concepts. The exercise requires original reflection, which AI does not yet master. It follows known patterns without offering new perspectives.

This is not the first time ChatGPT has been put to the test on this ground. And despite the many updates, the observation remains unchanged.


Why good writing is not enough to convince in philosophy

ChatGPT is not a bad student. It knows how to write, structure an argument, and produce content in a few seconds. But its inability to embody an original thought penalizes it heavily in an exercise as subtle as a philosophical essay.

The form of the copy was not criticized; on a strictly editorial level, it could have convinced. But on substance, the copy remained hollow. The teacher said so immediately, bluntly and without indulgence. A real high school student would probably have done better, spotting what was missing from the first lines. Intuition still makes the difference, as does critical thinking. These human qualities remain out of reach for artificial intelligence, especially when one must explore the gray areas of a subject, where the answers are never ready-made.

This experiment led by France 3 shows one essential thing: a well-written copy does not guarantee good reasoning. The grade ChatGPT obtained on the bac is proof of this. Artificial intelligence writes flawlessly but does not really think. It follows a logic, of course, but fails to grasp the nuances specific to the human mind. This is where the real difference lies, between knowing how to write and knowing how to think.
