Refueling criticality safety has a way of appearing settled once it has been reduced to paper, with margins that seem sufficient, boron in specification, and geometry treated as known, even though the situation itself is anything but fixed.

Everything checks out—on paper.

But refueling is not a calculation; it is a chain of human actions carried out in a changing configuration, with an open core and assumptions that are only true for as long as nothing disturbs them, which is rarely as long as we think.

And even the engineer doing the calculations is (sort of) human.

The calculation is assembled from models, inputs, interpretations, and decisions, and it carries the same potential for error as any physical step on the refueling floor, differing mainly in that its failures are quieter and therefore more easily trusted.

Once written down, a calculation gains authority, and that authority has a way of discouraging further questioning, which is precisely when it becomes most dangerous—not because it must be wrong, but because it is no longer being actively challenged.

Refueling, in practice, is a sequence of such acts, where fuel is moved, positions are checked and rechecked, each step individually simple and familiar, yet collectively forming a system whose behavior is far more complex than any single action suggests.

Small deviations can enter at many points along that chain, each one seemingly acceptable on its own, yet capable of aligning with others in ways that are not anticipated in isolation.

And when they do align, the system does not announce the transition in procedural terms; it reveals it through physics.

Not with alarms, but with heat.

Power rises, and in an open core, where there is no pressure to suppress phase change, boiling follows almost immediately, not as a slow development but as a rapid local response to increasing power. At that point, the distinction between PWR and BWR largely disappears, because with the vessel open, they boil alike. Steam forms voids, and through those under-moderated channels neutrons can stream all the way to the surface.

All is well if the calculation is correct.

But what if it isn’t?

In BWRs, source and intermediate range monitors provide a direct indication of the neutron population, independent of chemistry, positioning, or procedural correctness, responding to what is actually happening rather than what is assumed to be happening.

In PWRs, there is often no equivalent in-core indication available during refueling, and the safety case rests more heavily on soluble boron, verified concentrations, and administrative controls, all of which depend on correct execution within the same chain they are meant to protect.

In the end, this is not about margins written in a report.
It is about people working over an open core.

One system tells you what is happening.
The other assumes it already knows.

***

At Dampierre Nuclear Power Plant, Unit 4, during a refueling outage in 2001, the reactor moved quietly, almost imperceptibly, toward a configuration that no one had intended and that no calculation had explicitly allowed.

This was not a failure of equipment, nor a dramatic operator error, nor a single misplaced assembly that could be pointed to and corrected in isolation, but a gradual shift in the loading sequence in which one position offset led to the next, and the next, until the physical core being built in the vessel no longer corresponded to the loading map that everyone believed they were following.

By the time the discrepancy was detected, more than a hundred fuel assemblies had been placed one position away from where the design assumed them to be, which meant that the reactor core, as it actually existed in steel and water, had become something subtly but materially different from the one that had been analyzed.

Nothing happened in the immediate sense that operators are trained to expect.

There was no excursion, no sudden rise in neutron flux, no clear signal from the instrumentation that anything fundamental had changed, and the shutdown margin, on paper and in the moment, still appeared to be sufficient.

And that absence of feedback is precisely what makes the event worth remembering.

Because during refueling, a PWR sits in a state that is both physically forgiving and deceptively sensitive, with the vessel fully flooded and strongly moderated, with large reactivity effects tied to relatively small geometric changes, and with only limited, indirect instrumentation available to tell you how close you may be to a configuration that behaves differently from the one you designed.

Each fuel movement is small when viewed in isolation, a single discrete step that appears well within bounds, but the physics does not care about intent or sequence; it responds only to the configuration that exists at that moment, and when fresh assemblies that were meant to remain separated begin to cluster, the local neutron economy improves whether anyone notices or not.
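That clustering effect can be sketched with a deliberately crude toy model. Nothing here is real neutronics: the k-infinity values, the ten-slot row, and the window-average "local reactivity" proxy are all invented for illustration. The point is only that a one-slot offset in an alternating fresh/burned pattern creates a region where fresh assemblies touch:

```python
# Toy model only: invented k_inf values and a crude "window average"
# proxy for local neutron economy. Not a neutronics calculation.
import numpy as np

FRESH, BURNED = 1.30, 1.05   # illustrative assembly k_inf values (made up)

def max_local_k(row, w=2):
    """Highest mean k_inf over any w adjacent slots: a crude proxy
    for the most reactive local region of the row."""
    return max(float(np.mean(row[i:i + w])) for i in range(len(row) - w + 1))

# Intended pattern: fresh assemblies always separated by burned ones.
intended = np.array([FRESH, BURNED] * 5)

# Dampierre-style drift: from slot 3 onward, each assembly lands in the
# slot meant for its predecessor, so the tail of the map shifts by one.
realized = np.concatenate([intended[:3], intended[2:9]])

print(f"intended, most reactive pair: {max_local_k(intended):.3f}")
print(f"realized, most reactive pair: {max_local_k(realized):.3f}")
```

In the intended map every two-slot window averages fresh with burned; in the offset map two fresh assemblies sit side by side, and the local proxy jumps accordingly. The real effect in a flooded PWR core is of course three-dimensional and far larger than this caricature suggests.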

At Dampierre-4, that clustering effect created a region of higher reactivity than the loading plan allowed, not enough to drive the reactor critical under the actual circumstances of the event, but sufficient for the regulator to conclude, after the fact, that under slightly different surrounding conditions the same type of mispositioning could have initiated a nuclear chain reaction.

The calculations, in other words, were not wrong.

The procedures were not missing.

The hardware did not fail.

What failed was the alignment between the intended core and the one that was physically constructed, step by step, under the quiet assumption that each move was still anchored to a correct reference.

A loading map is an abstract object, precise and internally consistent, but it only has meaning if every individual action keeps the real core synchronized with it, and once that synchronization is lost, even by a single position, every subsequent correct action is applied to an already incorrect reality.

There is no immediate signal that marks that transition.

The reactor does not tell you that you are drifting.

It simply continues to behave, until it does not.

We often present refueling criticality safety as a problem that is solved by analysis and bounded by margins, but events like Dampierre-4 show that it is, in practice, one of the most human-dependent phases of reactor operation, where safety rests not only on physics and design but on the fragile continuity between intention and execution, maintained over hundreds of repetitive steps in an environment that offers very little immediate feedback when that continuity begins to slip.

***

At Kozloduy Nuclear Power Plant, Unit 5, in 2006, the trip signal was correct. Power to the holding magnets was cut.

And yet 22 control rods did not move.

They were not slow. They did not hesitate.
They did not release at all.

The cause was mechanical: armatures that had sat in long, tight contact did not separate when de-energized. With gravity as essentially the only driving force, some remained latched.

The main corrective action was simple and revealing: move them more often. Exercise the mechanism, so surfaces do not settle into each other. Watch release, not just drop time.

But the deeper issue sits against our assumptions.

In analysis, we assume a single stuck rod—bounded, predictable, designed for.
Kozloduy was different: 22 rods, unknown locations, no defined pattern.

Not a worse version of the same problem.
A different problem altogether.

Because once multiple rods stay out, you lose the geometry you rely on. Shutdown becomes contingent—on which rods happened to let go.
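That contingency can be made concrete with a small Monte Carlo sketch. The rod count of 61 is typical of a VVER-1000, but the per-rod worths and the uniform failure model are invented for illustration; a real analysis would use actual rod worths and failure mechanisms:

```python
# Toy sketch with invented numbers: how much shutdown worth remains when
# 22 of 61 rods fail to release depends on WHICH 22 stayed out.
import random

random.seed(0)
N_RODS, N_STUCK = 61, 22
# Fake position-dependent worths between 0.1 and 0.4 (arbitrary units).
worth = [0.1 + 0.3 * (i % 10) / 9 for i in range(N_RODS)]

samples = []
for _ in range(10_000):
    stuck = set(random.sample(range(N_RODS), N_STUCK))
    samples.append(sum(w for i, w in enumerate(worth) if i not in stuck))

# A single-stuck-rod analysis yields one bounding number; 22 stuck rods
# at unknown positions yield a distribution of outcomes.
print(f"inserted worth, best pattern:  {max(samples):.2f}")
print(f"inserted worth, worst pattern: {min(samples):.2f}")
```

The single-stuck-rod assumption collapses this distribution to one conservative point. Once the failed set is large and unknown, shutdown margin is no longer a number; it is a draw from a distribution shaped by which mechanisms happened to let go.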

And underneath it all is the quiet break in logic:

power off ≠ release

If release is not certain, insertion time is irrelevant.

The reactor did shut down, supported by inherent feedbacks. Still, the event was rated INES 2, because a fundamental safety function was no longer assured.

Which leads to the point that matters most:

You need time.

Time for the core to settle.
Time for a backup shutdown to act.
Time for systems and people to recover margin.

That time should come first from physics—Doppler, moderator effects, limited excess reactivity—already pushing the reactor toward subcritical when disturbed.
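That "time from physics" can be illustrated with a minimal point-kinetics sketch: one delayed-neutron group plus a Doppler-style fuel-temperature coefficient. Every constant below is a generic textbook-order value chosen for illustration, not plant data. A positive reactivity step below prompt critical produces a power rise that the temperature feedback arrests on its own, before any engineered system acts:

```python
# Minimal point-kinetics sketch: one delayed-neutron group plus a
# Doppler-style temperature feedback. All constants are generic
# textbook-order values, NOT plant data.
beta, lam = 0.0065, 0.08      # delayed fraction; precursor decay (1/s)
Lambda = 1e-4                 # prompt neutron generation time (s)
alpha = -2e-5                 # fuel temperature coefficient (dk/k per K)
heat = 50.0                   # toy heat-up rate (K per unit power per s)

P = 1.0                                # power, in units of nominal
C = beta * P / (lam * Lambda)          # precursors at equilibrium
T = 0.0                                # fuel temperature rise (K)
rho_step = 0.003                       # +300 pcm, below prompt critical

dt, peak = 1e-4, 0.0
for _ in range(int(5.0 / dt)):         # 5 seconds, explicit Euler
    rho = rho_step + alpha * T         # feedback eats the inserted reactivity
    P += ((rho - beta) / Lambda * P + lam * C) * dt
    C += (beta / Lambda * P - lam * C) * dt
    T += heat * P * dt                 # no cooling: worst-case heat-up
    peak = max(peak, P)

print(f"peak power {peak:.2f} x nominal; power at 5 s {P:.2f} x nominal")
```

The toy shows feedback buying seconds, not safety: power turns over only because the temperature coefficient is negative and the step stays below the delayed fraction. That is the window in which scram makes shutdown fast and backup systems make it durable.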

Scram should make it fast.
Backup systems should make it durable.

But neither should be required to be perfect in the first second.

Because if the first second demands perfection, you have no time at all.