Teaching normalization past 1NF in *introductory* database (DB) courses should be considered harmful. Obsessing about normal forms (past 1NF) should be left for *advanced* database courses.

Every basic course on relational databases includes one or more chapters on normal forms: 1NF, 2NF, 3NF, BCNF, 4NF, 5NF, 6NF; and sometimes a few more. These are somehow considered essential. I find that this leads to confusion; most people simply do not know what to do with it, other than to learn just enough to pass an exam. Take a look at your own (personal or business) SQL DBs: very likely they are a complete mess.

This post is geared towards *developers* who want to keep their databases free of logical errors, easily expandable, and easy to maintain. At the same time, it stays away from terms like: functional dependency, join dependency, multivalued dependency, the Heath theorem, the chase algorithm, rigorous mathematical definitions of normal forms, and so on.

There is a simpler way, a better way, a beautiful way, once understood by most, forgotten now. It somehow got swept under the sands of time.

Q: *The way?*

Use natural language and logic. Describe your business problem using predicates and constraints. Match relations to predicates (they go together), everything else simply follows.

Q: *What is a relation?*

For a developer, it’s a *data type*. A higher-order data type. It has a precise definition; look it up. The same way there exist integer data types, values, and variables, there exist relation data types, values, and variables.

Q: *Relation value, variable? What does it represent?*

A set of facts, under a common predicate.

Q: *Facts, predicate?*

A fact is a proposition considered to be *true*. A proposition is a *meaning* of a declarative *sentence*. A predicate is a generalized proposition. So, a predicate with a matching relation represents a *set of facts* — sentences considered to be true.

Q: *Store sets of facts?*

Relational variables (SQL tables in 1NF) store sets of facts. Use relational operators to calculate on these facts.

Q: *Relational operators?*

Every data type comes with operators, otherwise it would not be very useful. The same way *integers* come with: addition, subtraction, multiplication, …; *lists* with: head, tail, concatenation, …; *relations* come with: join, union, project, restrict (and their SQL equivalents), etc.

Q: *What are these operators doing?*

Operations on sets of facts: *reasoning*.

Q: *What about joins, are they bad, should we avoid them?*

Should we avoid multiplication of integers, concatenation of strings and lists; unions and intersects of sets, etc.?

Q: *And keys: primary, alternate, foreign?*

Every set of facts needs one or more logical constraints to make sense. These should be spelled out in a natural language, together with the predicates. Table keys (`PK, AK`) are simply implementations of these common-sense logical constraints. Similarly, an `FK` is an implementation of an inclusion constraint between two sets of facts.

It is important to understand keys and other constraints.

Q: *What is this normalization thing about?*

It is all about removing and preventing logical errors.

First, SQL does not work with relations out-of-the-box, but with tables. A table *can* represent a relational variable, but it takes rigor — the burden is on the programmer. When a table properly represents a relational variable, it is said to be in 1NF. This is important, so look it up.

Second, and this is the big one: *preventing contradiction*. Contradiction in logic leads to the so-called "deductive explosion". Steps have to be taken to prevent contradiction. Most of the time this involves implementing proper constraints and eliminating (reducing) redundancy by decomposition into projections (vertically splitting tables).

Q: *How far to go with decomposition, when is it enough?*

Stop when one or more of the following holds true:

- There is no redundancy.
- There is redundancy, but it can not be removed by decomposition.
- It is not possible to decompose the table without losing information.
- It is not possible to decompose the table at all.

Just *make sure* that when reasoning about redundancy, you reason about the predicate and constraints, *not* about a few rows of sample data.

Q: *How does redundancy lead to contradiction?*

When redundant data goes out of sync, it becomes possible to state something that is true and not true at the same time.

Q: *What to do about it?*

Remove redundancy from relations (tables in 1NF), mostly by decomposing into projections. Use predicates and constraints, write them down, start with basic building blocks, use logic.

Q: *What basic building blocks?*

Simple predicates, which lead to sets of elementary facts. A simple predicate is a predicate which cannot be split without losing information; the same goes for elementary facts.

Q: *How to use this in design?*

Start with simple predicates and combine them based on constraints (keys). Most of the DB normalization story is about decomposing tables; the real question is *how to compose simple predicates*.

Q: *What about normal forms?*

Reasoning about relations gets you to 1NF, removing redundancy to 5NF, and simple predicates to 6NF. In other words, you can get your tables into 5NF even if you do not know the precise definition of it. By the way, the useful normal forms can be easily verbalized:

Form | Meaning |
---|---|
1NF | Table properly represents a relational variable. |
BCNF | Every fact depends on the key, the whole key, and nothing but the key; for every key (there can be more than one). |
5NF | Decomposing the table would not remove any redundancies. |
6NF | Matches a simple predicate. |

Take a look at this again: only the *meaning* is important; the form name is irrelevant.

Q: *What about 2NF, 3NF, 4NF, and a few others; should developers study them?*

They are not very useful. If you need to pass an exam, then do as the curriculum demands. You may decide to study them under "advanced DB topics", or to learn some DB history. Maybe you are a DB aficionado; I am.

It is important to understand 1NF, the rest is optional as long as you understand the logic of it (see the table above).

Q: *So not much rigorous math required?*

No, just a basic understanding of predicates and constraints in logic, and how they map to the DB concepts of relations, relational variables, and constraints.

Q: *Any other benefits?*

- Using logic and natural language is a huge advantage when one needs to communicate with analysts, product managers, and other business people.
- It is a good method to elicit business requirements, makes it easy to reason with non-techies; and once done you actually end up with a usable DB model. Bonus documentation.
- Easy to expand. As business evolves, new predicates (tables) are added, as opposed to constant refactoring.
- The schema is stable over a long period of time: new tables may be added, but old ones are rarely altered. There is no need to change analytics queries (code) that already work, even if application code changes.
- And finally, performance, a.k.a. speed. The prevailing mantra "joins are slow" is true only for toy examples with low data volumes, when CPU cycle count is the only resource that matters. Large DBs run faster with narrow, "normalized" tables — a natural consequence of this design approach.

Q: *Any examples?*

See the Eight Queens problem solved in SQL, using this design method.