Reactor design should primarily focus on what is necessary, rather than solely on what is required by regulations.

If something is mandated but not truly needed, it’s often justifiable to leave it out. However, if something essential is omitted and it leads to harm, it won’t matter whether it was deemed “required” or not.

Consider the Chornobyl disaster: the positive scram effect, by which inserting the control rods briefly added reactivity instead of removing it, was not prohibited and did not violate any regulation in force, yet it turned out to be a fatal design flaw. Similarly, at Fukushima Daiichi, the emergency batteries were placed in the basement, where the tsunami flooded them and took away the power the plant needed.

Ultimately, the designer bears the responsibility, not the regulator. Regulations are created by people and cannot cover every potential design solution. (I’ve drafted some regulations myself and have experienced the challenges firsthand.)

Nuclear safety should not be viewed as a mere compliance exercise.

***

Early dialogue between designers and regulators is not just a courtesy; it is essential for risk reduction, especially when dealing with new reactor concepts.

Waiting until the licensing application phase means you are no longer designing; you are merely defending your choices. This is the worst time to realize that your assumptions do not align with regulatory expectations.

What you considered to be inherent safety, what you assumed did not need classification, and what you believed could be justified analytically may not be accepted in the same way by regulators.

At that point, every mismatch leads to delays—months can turn into years. These delays arise not because the design is unsafe, but because it was never properly aligned with regulatory requirements.

This issue is even more pronounced with novel concepts.

Interpretation is critical, and interpretation requires dialogue.

Early engagement allows for:

Identification of fundamental licensing showstoppers.
Agreement on methods and acceptance criteria.
Clarity on what must be demonstrated versus what can be argued.

Moreover, it compels the designer to think deeper—not just about whether a design works, but about whether it can be demonstrated in a way that is credible to others.

In the end, a safe reactor that cannot be licensed serves no purpose; it is merely a theoretical exercise.

The designer carries the sole responsibility for the design. However, without ongoing dialogue, the regulator becomes a last-stage gatekeeper. This is not an ideal time to reassess fundamental principles.

Trying to pressure regulators with tight schedules never works; that is when projects tend to slow down.

***

Nuclear energy is not expensive due to the laws of physics; it’s costly because we have made it that way.

Let’s start with procurement. The term “nuclear grade” has shifted from meaning materials that are traceable and suitable for their intended purpose to referring to rare, overly specified items that come with bureaucratic challenges. When you purchase a valve or a pump, you’re not just buying the component; you’re engaging in a complex documentation process that happens to include that valve. Supply chains are stretched thin, there are few vendors, and every component becomes a custom project. The predictable outcomes are long lead times, limited competition, and prices that are unrelated to the actual materials or functions.

Next, consider safety—or rather, the illusion of safety. We continually add layers: more systems, more classifications, and more redundancy piled on top of redundant features. On paper, this appears to enhance safety. In reality, much of it compensates for earlier design decisions that were never streamlined. Genuine safety arises from clarity, but what we often create is opacity. When no one can fully understand the system, we label it as “defense in depth.”

Automation accentuates this issue. Instead of designing reactors that function reliably on their own, we implement software to manage edge cases. This introduces complex control logic, interlocks, and digital safety systems to keep the plant within strict operational bounds—bounds that wouldn’t be as tight if the fundamental design were simpler. Verifying this software is difficult, and proving its reliability under all conditions is even more challenging. As a result, we end up with systems that are both expensive and, paradoxically, harder to trust.

A reactor with a solid basic design does not require programmable safety automation.

Furthermore, complexity accumulates. Each new system necessitates interfaces, testing, maintenance, licensing, and training. Each interface presents a potential failure point, and each requirement feeds back into procurement, making components even more specialized and expensive.

The uncomfortable truth is that a significant portion of nuclear costs is self-inflicted. We have traded simplicity for layers and are paying for every layer twice—once in hardware and again in the effort required to prove that everything works.

If we start with a clean slate and prioritize straightforward, physically stable designs using components that can be easily procured, a different cost structure emerges. While it may not be cheap, it is grounded in reality rather than in paperwork and convoluted processes.

***

The problem with harmonization of regulations is that everybody has a say. Even those with bad ideas. 

The outcome would be better if harmonization started among a handful of like-minded countries, with the rest having no say.

***

Should we design n+2—or n+1 with a PRA (probabilistic risk assessment) repair window?

In Finland, the answer has so far been simple: n+2 for safety systems.

Internationally, it often isn’t. n+1 with allowed repair times is common practice.

At first glance, n+2 looks stronger. And in a deterministic sense, it is. But the real purpose of n+2 is often misunderstood.
n+2 is not just for failures. It is there to enable preventive maintenance during power operation, reducing the work that must be done during outages.

One train can be out for planned work.
Another can fail.
And the safety function is still intact.

n+2 places safety in hardware, so that maintenance does not immediately translate into risk.

With n+1, preventive maintenance already consumes your only margin.

From that point on, you are relying on:

repair speed
operational discipline
probabilistic justification
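The margin argument can be put in rough numbers. A minimal sketch, assuming independent trains, a 1-of-N success criterion, and a purely hypothetical per-train failure probability over the maintenance window (illustrative arithmetic only, not a PRA):

```python
# Illustrative sketch: probability of losing a 1-of-N safety function
# during a planned maintenance window. Assumes independent trains and
# a hypothetical per-train failure probability p over the window.

def loss_probability(total_trains: int, in_maintenance: int, p: float) -> float:
    """Probability that every remaining train fails during the window."""
    remaining = total_trains - in_maintenance
    return p ** remaining

p = 1e-2  # hypothetical per-train failure probability over the window

# n+1 design (2 trains, 1 needed), one train out for maintenance:
# the single remaining train IS the margin.
print(loss_probability(2, 1, p))  # p

# n+2 design (3 trains, 1 needed), one train out for maintenance:
# one more failure can still be tolerated.
print(loss_probability(3, 1, p))  # p squared
```

The point of the sketch is structural, not numerical: with n+2, planned maintenance still leaves a failure-tolerant configuration; with n+1, it leaves a single point of failure for the duration of the work.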

The messy part is this:
Even in n+2 plants, reality can drift further.

Maintenance takes one train out.
A failure takes another.
You are suddenly at n+0.

At that point, you are outside the design intent, and you are relying on risk insight, whether you planned to or not.

So the real question is not regulatory. It is whether you want preventive maintenance to be:

fully covered by design → build n+2
part of the risk-managed operating envelope → build n+1 + repair window
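The repair-window branch is usually argued in terms of incremental risk: the increase in core damage frequency while the train is unavailable, multiplied by the allowed outage time. A minimal sketch with hypothetical numbers (not thresholds or values from any specific regulation):

```python
# Illustrative sketch of the risk-informed repair-window argument.
# ICDP = incremental core damage probability accumulated while a
# safety train is out of service. All figures below are hypothetical.

def icdp(baseline_cdf: float, cdf_with_train_out: float, outage_hours: float) -> float:
    """Incremental core damage probability over the outage window.

    CDF values are per reactor-year; the outage time is in hours.
    """
    hours_per_year = 8760.0
    delta_cdf = cdf_with_train_out - baseline_cdf  # risk increase, per year
    return delta_cdf * outage_hours / hours_per_year

# Hypothetical plant: baseline CDF 1e-5/yr, rising to 5e-5/yr with the
# train out, and a 72-hour allowed repair window.
risk = icdp(baseline_cdf=1e-5, cdf_with_train_out=5e-5, outage_hours=72)
print(f"{risk:.2e}")  # small, but nonzero and clock-driven
```

This is exactly the trade the n+1 approach makes explicit: every hour of repair time is an hour of elevated risk, so the repair clock, not the hardware, carries the safety argument.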

Both can be safe. But they are not the same.

n+2 gives you the freedom to maintain without thinking about immediate risk.
n+1 forces you to think every time you pick up a wrench.

In my view, both should be allowed. The designer should be able to decide whether to focus on minimizing component count or outage duration, particularly with new kinds of plants entering the market.

But be clear about the intent:

n+2 → maintenance is routine, risk stays in hardware
n+1 → maintenance consumes margin, risk must be actively managed

Because in the end, the plant will not follow the redundancy count.

It will follow how safely you can take things apart while it is still running.