Approximation All the Way Down
The equation that governs the behavior of electrons in a molecule is the Schrödinger equation. It is exact. Given the positions of all the nuclei, it produces the exact electronic wavefunction, from which every electronic property of the molecule follows. Solving it exactly is also completely intractable for any system with more than a handful of electrons: the computational cost scales exponentially with the number of electrons, and a molecule of practical interest has hundreds or thousands of them.
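To see where the exponential scaling comes from, write the time-independent equation for N electrons (spin suppressed for brevity):

```latex
\hat{H}\,\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N) = E\,\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N)
```

The wavefunction depends on all 3N electronic coordinates at once. Representing it on even a coarse grid of k points per coordinate requires k^(3N) numbers, so the cost of an exact treatment grows exponentially with the number of electrons.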
Every method that chemists and materials scientists actually use to compute molecular properties is an approximation to this equation. The entire field of computational chemistry is the management of this approximation hierarchy, and the skill that matters most is knowing what you gave up at each level and whether it matters for your question.
The hierarchy
The first major approximation is the Born-Oppenheimer approximation: treat the nuclei as classical point charges fixed in space, and solve the electronic Schrödinger equation for that nuclear configuration. This decouples nuclear and electronic motion, which is justified because electrons are much lighter and faster than nuclei. It breaks down for processes involving excited electronic states or conical intersections, which are important in photochemistry and photophysics but not in the ground-state chemistry most materials work involves.
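Schematically, the approximation factors the full wavefunction into an electronic part, which depends on the nuclear coordinates R only as parameters, and a nuclear part:

```latex
\Psi(\mathbf{r}, \mathbf{R}) \approx \psi_{\mathrm{elec}}(\mathbf{r}; \mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}),
\qquad
\hat{H}_{\mathrm{elec}}\,\psi_{\mathrm{elec}}(\mathbf{r}; \mathbf{R}) = E_{\mathrm{elec}}(\mathbf{R})\,\psi_{\mathrm{elec}}(\mathbf{r}; \mathbf{R})
```

Solving the electronic problem at many fixed geometries traces out E_elec(R), the potential energy surface on which every subsequent level of the hierarchy operates.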
The second major approximation is density functional theory. Instead of solving for the many-electron wavefunction, solve for the electron density, a function of three spatial coordinates rather than 3N electronic coordinates. The Hohenberg-Kohn theorem guarantees that the ground-state density determines all ground-state properties, so the density is sufficient in principle. The problem is that the exact functional relating the density to the energy is unknown, and every DFT calculation uses an approximation to this functional. The exchange-correlation functional is where DFT's errors live.
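In the Kohn-Sham formulation that practical DFT codes implement, the energy is decomposed so that everything unknown is collected into a single term:

```latex
E[n] = T_s[n] + \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, d\mathbf{r} + E_{\mathrm{H}}[n] + E_{\mathrm{xc}}[n]
```

Here T_s is the kinetic energy of a non-interacting reference system, v_ext is the potential from the nuclei, E_H is the classical Coulomb repulsion of the density with itself, and E_xc, the exchange-correlation functional, absorbs everything else. The first three terms are known exactly; only E_xc must be approximated.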
Different functional approximations fail in different ways. Local and semi-local functionals underestimate van der Waals interactions. Hybrid functionals include some exact exchange and perform better for many properties but are computationally more expensive. Self-interaction error causes problems for transition metal systems and charge-transfer states. No functional is uniformly accurate, and knowing which functional is appropriate for which problem is a significant body of practical knowledge.
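The self-interaction error in particular has a precise statement: for a single electron, the Hartree term describes the electron repelling itself, and the exact exchange-correlation functional cancels that spurious repulsion,

```latex
E_{\mathrm{H}}[n] + E_{\mathrm{xc}}[n] = 0 \quad \text{for any one-electron density } n.
```

Approximate functionals violate this cancellation, and the residual self-repulsion artificially delocalizes electrons, which is why localized d electrons and charge-transfer states suffer most.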
Force fields and neural network potentials
Classical force fields sidestep the electronic structure calculation entirely. Instead of solving for the energy quantum-mechanically, they parameterize it as a sum of simple terms: bonds are harmonic springs, angles are harmonic springs, torsions are cosine series, non-bonded interactions are Lennard-Jones plus Coulomb. This reduces the computational cost by orders of magnitude, enabling simulations of millions of atoms over microseconds. It also introduces errors that depend on how the force field was parameterized and for which class of molecules.
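A typical fixed-charge force field of this kind (the AMBER and CHARMM functional forms are close to this) evaluates:

```latex
E = \sum_{\text{bonds}} k_b (r - r_0)^2
  + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\text{torsions}} \sum_n V_n \left[ 1 + \cos(n\phi - \gamma) \right]
  + \sum_{i<j} \left[ 4\epsilon_{ij} \left( \frac{\sigma_{ij}^{12}}{r_{ij}^{12}} - \frac{\sigma_{ij}^{6}}{r_{ij}^{6}} \right) + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right]
```

Every constant in this expression, the spring constants, equilibrium geometries, Lennard-Jones parameters, and partial charges, is fitted to reference data, which is why the transferability failures described next follow directly from the functional form.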
Force fields parameterized for proteins perform poorly on polymers. Force fields parameterized for small organic molecules fail for organometallics. Force fields with fixed charges cannot model polarization effects. ReaxFF, a reactive force field that allows bond breaking and formation, is parameterized for specific chemical systems and does not transfer reliably outside them.
Neural network potentials (NNPs) sit between DFT and classical force fields. They are trained on DFT data and learn to predict DFT-level energies and forces at a fraction of the computational cost. Their accuracy is bounded by the DFT data they were trained on, their transferability is bounded by the chemical diversity of the training set, and their failure modes are different from those of classical force fields but no less real. A neural network potential extrapolating outside its training distribution can produce energetically nonsensical predictions without any warning.
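To make the architecture concrete, the sketch below shows the per-atom decomposition used by Behler-Parrinello-style potentials: the total energy is a sum of atomic contributions, each predicted by a small network from a descriptor of that atom's local environment. Everything here, the toy radial descriptor, the layer sizes, the random weights, is an illustrative placeholder, not any real trained potential.

```python
# Minimal sketch of the per-atom decomposition behind many neural network
# potentials. Descriptor, layer sizes, and weights are all placeholders.
import numpy as np

def descriptor(positions, i, cutoff=5.0, nbins=8):
    """Toy radial descriptor: Gaussian-smeared neighbor distances of atom i."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[(d > 1e-8) & (d < cutoff)]           # neighbors within the cutoff
    centers = np.linspace(0.5, cutoff, nbins)
    return np.exp(-(d[:, None] - centers[None, :]) ** 2).sum(axis=0)

def atomic_energy(g, w1, b1, w2):
    """Tiny MLP mapping one descriptor vector to one per-atom energy."""
    return float(np.tanh(g @ w1 + b1) @ w2)

def total_energy(positions, params):
    """Total energy = sum of per-atom contributions. In a real NNP the
    parameters are trained to reproduce DFT energies and forces."""
    return sum(atomic_energy(descriptor(positions, i), *params)
               for i in range(len(positions)))

rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 16)), rng.normal(size=16), rng.normal(size=16))
positions = rng.normal(size=(10, 3)) * 2.0     # arbitrary 10-atom geometry
print(total_energy(positions, params))         # an energy, arbitrary units
```

Nothing in this construction signals when a descriptor falls outside the region covered by the training data, which is exactly the silent extrapolation failure described above.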
The skill
The practical skill this hierarchy demands is the ability to match the level of approximation to the question being asked. Using DFT when a force field would suffice wastes compute. Using a force field when DFT is required produces wrong answers. Using a generalist NNP trained on small organic molecules to simulate an inorganic ceramic produces confident nonsense.
A question about relative polymorph stability requires DFT accuracy on total energies, because the energy differences between polymorphs are often well under a kilocalorie per mole and the force field approximation introduces errors larger than the signal. A question about the diffusion coefficient of a small molecule in a polymer matrix requires long timescales and large system sizes, which only force fields can provide, and the qualitative answer is robust to the level of quantitative error a force field introduces.
Knowing which question you are asking, and which approximation is adequate for it, is the majority of what makes a computational materials scientist useful. The calculation is almost never the hard part. The hard part is deciding which calculation to run, what its outputs actually mean, and how much of the answer to trust.
All of science is approximation. The periodic table is a simplification of nuclear physics. The ideal gas law is a simplification of kinetic theory. Newton's laws are a simplification of general relativity at low velocities and weak fields. This is not a deficiency of science. It is the method. Every simplification enables analysis that would be impossible at the level below, at the cost of accuracy that may or may not matter for the problem at hand. The craft is in the judgment about which costs are acceptable, and that judgment requires understanding both what the approximation gives you and exactly what it takes away.