In September 1980, interest rates hovered around 12%, movie audiences were still reeling from the revelation that Darth Vader was Luke’s father, the first personal computers were hitting the market and I was about to enter MIT as a freshman in the Aeronautics/Astronautics department. Computers were pathetically slow and cumbersome beasts in the early 80s. I did some programming in high school on a DEC PDP-11.
I stored my ‘code’ on punched paper tape in a shoe box that I carried around, an obvious technique for raising my standing in high school social circles.

Like most engineering departments at MIT, the aero department had a lab class. My class partner and I chose a project called “Thrust vector control by secondary fluid injection”. It was a cool idea: steer a rocket by injecting a secondary fluid into the main engine from the nozzle wall to separate the flow and thereby ‘vector’ the thrust. If it worked, it would allow designers to eliminate the heavy, complex gimballing systems then used to physically move the exhaust nozzle for steering. We built an apparatus and tested the idea, but we also thought it would be interesting to model the system with a computer. I remember distinctly making an appointment with one of the professors who was pioneering something called Computational Fluid Dynamics to discuss the project. With the naïve optimism of undergraduates, we sat in his office explaining our need to model combusting fluids moving at supersonic speed through the exit nozzle and impinged from the side by a cool secondary fluid. He looked at us, bewildered. It turns out that what we wanted to do would not be possible for probably another twenty-five years.
The field of computational modeling has progressed enormously since then. With hardware doubling in capability nearly every two years since 1980, our foundational ability to perform calculations has improved exponentially. In addition to the progress in hardware, there have been very significant advances in algorithms and software design. Simulation and modeling across a wide range of disciplines have benefited from these developments, yielding faster software and larger, more refined, more accurate models. The word “simulation” most often implies simulation of the time evolution of a physical system. Fundamentally it involves i) formulating the physical equations that govern the system, ii) mapping those equations to algorithms, and iii) implementing those algorithms efficiently on a modern parallel computer. With that basic prescription, you can literally predict the future by running the simulation forward in time.

Computational modeling is used in a vast array of technical disciplines. Aircraft manufacturers use it to design and build jets, experimenting with computer models to understand the dynamics of lift, drag and thrust before bending metal into an aircraft. Auto manufacturers use it to explore the aerodynamic, acoustic and stability implications of designs before committing to production. Engine designers use it to model chemical reactions and heat flow. Oil companies use computational modeling extensively in both seismic exploration, for discovery, and reservoir simulation, to optimize production. I am frequently asked by family and friends what Stone Ridge Technology actually does, and I have a stock answer: “We market technical software for the oil industry that allows companies to model how oil, gas and water flow under the ground”. That gets across the basic idea and avoids the confusion that an answer such as “we market a petroleum reservoir simulator” might evoke.
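To make that three-step prescription concrete, here is a minimal sketch (my illustration, not ECHELON code, and no GPU required) using the 1D heat equation as a stand-in physical system. The equation, the finite-difference update, and the grid are all assumptions chosen for simplicity:

```python
# Illustrative only: the three steps applied to the 1D heat equation
# u_t = alpha * u_xx, a deliberately simple stand-in physical system.
import numpy as np

def simulate_heat_1d(u0, alpha, dx, dt, steps):
    """March an initial temperature profile u0 forward in time.

    Step (i):   the governing equation is u_t = alpha * u_xx.
    Step (ii):  it is mapped to an explicit finite-difference update.
    Step (iii): the update is written as whole-array operations, the
                kind of work a parallel machine applies to every grid
                cell at once.
    """
    u = np.asarray(u0, dtype=float).copy()
    r = alpha * dt / dx**2  # explicit scheme needs r <= 0.5 for stability
    for _ in range(steps):
        # Update interior cells; boundary values are held fixed.
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# "Predicting the future": start with a hot spot and run time forward.
u0 = np.zeros(11)
u0[5] = 1.0
u = simulate_heat_1d(u0, alpha=1.0, dx=1.0, dt=0.25, steps=50)
```

After fifty steps the initial spike has diffused outward, which is exactly the "future" the model predicts.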
Our product is called ECHELON and I want to explain why I think it represents a significant development, not just for reservoir simulation but for simulation in general.
ECHELON solves a specific set of governing partial differential equations (PDEs) on a discrete grid, namely those describing the time evolution of a petroleum reservoir. In this it is analogous to simulators in a wide variety of other fields, e.g. fluid dynamics, structural mechanics and climate modeling, which solve different sets of PDEs. ECHELON is unique, however, in that it has been developed from inception to run on massively parallel GPUs (graphics processing units) instead of CPUs. To my knowledge, it is the only commercial technical software application in any engineering field so created. As discussed at length in previous blog posts, GPUs move and calculate on data faster than CPUs, and their advantage has grown with each hardware generation; the memory bandwidth and floating-point capability of the latest GPUs are now almost 10x those of the latest CPUs. A decade ago, in the early days of GPU computing, the first applications addressed were those that were trivially parallel and spent an inordinate amount of time in a single calculation kernel, e.g. explicit time-stepping algorithms applied to wave propagation, or Monte Carlo methods. Despite excellent results from those pioneering efforts, many still dismissed GPU computing as a “niche” effort, useful only for offloading simple, easily parallelized algorithms. The implicit solution of the time evolution of PDEs on a grid, as required in reservoir simulation, is not such a problem: it requires hundreds of calculation kernels, not one of which is dominant, and many of which do not easily expose parallelism. The existence of ECHELON, its performance, its ability to treat ultra-large models and its scaling behavior all stand as a powerful rebuttal to those skeptics. ECHELON easily tackles the largest models, runs faster and requires a fraction of the hardware footprint needed by CPU codes.
It asserts by demonstration that GPUs can be used effectively for very complex technical codes that form the core of business critical engineering applications.
The performance leaps we have realized in reservoir simulation can be achieved in other fields as well. I hope the good people at ANSYS, Altair, Dassault and elsewhere are listening. Efforts in this direction would throw open the door to game-changing gains. We have come a long way since the 1980s, but the world still needs faster simulation, and we are now in a position to deliver it with modern algorithms and NVIDIA GPUs. ECHELON demonstrates that it is possible.