Latest Posts

Seek financial independence

Although I can be rather formal and I’m generally not a rule breaker, I’m also somewhat of a nonconformist, with a strong need not to feel like just another cog in some machine, be that machine society or a place of employment.

As a kid born and raised in Italy, I remember this culture of aiming to someday "find a job". A passion for computers and programming, and a willingness to move outside the country, were thankfully my ticket away from a possible life that I would have considered a curse for myself.
I then went on to work for many years in game development, doing mostly 3D programming, something that always compelled me because it was technically challenging and also offered the exciting prospect of creating virtual worlds where one could easily experience new things.

On my first real try for independence I still went for video games, because that’s what I knew best and because there were things that I wanted to create and publish, which can be quite rewarding.
However, games aren’t really such a great business. Today, game development has largely been commoditized. The first mobile hardware still required a certain degree of technical skill to publish something noteworthy, but that has progressively ceased to be the case.

Although I think that there’s still a lot of room to apply technology to game development, that’s something better done in a larger team, as an employee with maybe a great salary, but an employee nonetheless, at an age when one is supposed to become a manager and stop worrying about software engineering… no thanks.

From that perspective, the best move I could have made was probably what I did when I put all my efforts into making something of algorithmic trading. It’s taken a lot of time and effort to finally have some degree of confidence in it, but it’s given me a direct path to build wealth. Unlike when developing a game, right from the start there’s a sense that one could put some algorithm live on some market and start to print money. That’s unfortunately not how it goes: all things considered, it still took a good couple of years of continued work before I could hope to see profits being generated with some degree of confidence.

Nevertheless, working in finance is still a much more direct path to wealth, and one naturally develops the skills necessary to understand investing, something that everyone should know something (or a lot) about.

All this may sound like I’m obsessed with wealth, but it’s really more about independence. We live in an unfair world where money is never enough. At some point or another, one needs an excess of wealth to solve some problem. My paranoid side tells me that it’s a mistake to live a standard life with a good salary and hope to get comfort, health and occasionally some justice… that is, of course, if one values one’s own individuality, a view that nowadays may not be as popular, but to each their own.

Truth and reality

Premise: not breaking any new ground here, just laying it down as a reference.

Something is true if it’s real. Everything that we conceive is the fruit of perception, which is an intake of signals from our sensory abilities, such as vision and hearing. For all intents and purposes, there is no objective reality. Our mapping of reality is limited by our sensory abilities, which are limited by nature. Reality is also how we integrate those signals into our model of the world.

To try and establish a common base reality, one should follow two major rules:

  1. Reality is defined by consistency. This is at the root of the scientific method. Nothing can stand on its own unless it’s consistent with the rest of the established theories built on observations. This does not mean that established theories cannot be changed, but they should only be changed or amended when the new model adds details that give a better understanding of nature.
  2. The observer should question the environment if there’s a sense of impaired mental capacity of the self. This is to avoid dream-like states of mind. When dreaming, belief is usually momentarily shaken by the fact that one is unable to perform trivial mental exercises, such as actually looking at a screen with code and being able to edit and debug it. This is a sign that the brain is busy trying to generate its own reality instead of simply processing inputs from the real world. In popular culture this is sometimes depicted as pinching oneself to see if there’s a sense of pain. The idea is to perform a sort of brain pinch, to see if there’s a struggle to achieve a level of mental acuity that is known to be possible.

Of course, these rules are relatively vague in themselves, so for practical purposes they are guidelines, but I believe that they are what one should strive for to establish a reality to work with.

It’s very easy to claim to be consistent. In fact, one should always apply some self-doubt at certain junctures, to rethink whether or not one is indeed being consistent, and to see if perhaps there’s a deeper level at which this may no longer be the case, such as when added details would negate the consistency of thought.

The take-profit fallacy

In trading there are these stop-loss and take-profit concepts. The former is a necessary pain, the latter is an unnecessary long-term pain, masked as a short-term joy.

A stop-loss is an order to sell if the price falls too far below the purchase price. This is to prevent losing too much in a single trade.

A take-profit is an order to sell if the price has risen to the point of considering the purchase a success and just get out of it.

However, the price moves continuously, and there’s a high risk of missing out on more potential profit simply because one has decided that, for example, 3% is good enough for the day.

In trading more than anything, if you’re not earning you’re losing: to settle today for a limited profit means to have a negative balance a few months down the road. All missed profits add up and make a sizable difference in the long-term balance of a fund.

One intuitive solution is to use a take-profit and then follow it with another buy once the price drops, so that one is still inside the trade, now even with more assets/margin. The problem is that the price may just as well go up instead, and then one is forced to buy back at a higher price, losing precious buying power. Then the price may eventually drop anyway and force the trade into a loss.

Of course there may be cases where it makes sense to sell earlier than the main indicator suggests… this, however, should be done under the reasonable expectation that the price will indeed drop. The momentum of the market should be taken into consideration, instead of simply settling for a quantity of profit.
One way I do this is at known price levels (calculated from the concentration of past trading volume) that may act as points of resistance. Even then, it’s very easy to stumble into false positives. I use this in my algorithms, but very sparingly.
It sometimes works, but only because there’s an indication that the price may indeed drop, not simply because there’s a sense that x% ought to be good enough for now.

Above are two examples of selling at a significant level while expecting a rejection. This is more sophisticated than simply selling after a certain amount of profit, but it can still fail and have an adverse effect on the total balance. What happens when a level is reached could be a rejection as well as an upwards rally.

Digital Perspective

Quantization and resolution of data are two key concepts in computer science, at the root of the digital revolution. I think that even a cursory understanding of these concepts can be useful in building a mental framework for seeking objectivity.

Quantization

For my generation, the term digital was popularized with Compact Discs. CDs look metallic and shiny and are read by lasers. They fit the ideal of something new and futuristic very well.

Lasers, however, aren’t what makes a CD a digital medium. CDs are digital because they store information that is quantized. The perfection in reproduction of a CD is due to the fact that there is an a-priori determination of what the data in a sound sample should look like.
Quantization is a process necessary to encode data for digital storage. It sets the boundaries for a relation between a physical, microscopic deformation of a lump of atoms on the CD and the number that it represents.


Bits on a CD.

Quantization is also wasteful, because precision is only achieved through an abundance of physical space to encode each piece of data. Such space is used to avoid ambiguities that could arise from subsequent deformities of the support. Note that significant deformation would still lead to errors in the data; this brings the need for an encoding format that can deal with errors, but we won’t bother with that here.

Quantization can be seen as a signed contract between the writer and the future readers that spells out the exact data format and the amounts of bits of data that were used to digitize the input signal, be that a sound wave, an image or anything else.

This is a fundamental perspective when trying to determine what’s truthful and correct in a generic sense. Of course, truth as applied to everyday life is infinitely more complex than the playback of a sound track, but the concept is valid nonetheless. Even if objective truth can’t be achieved, it sets a point of reference that can be kept in mind as one strives for objectivity.

It should be noted that legal text also tends to assume a format that is somewhat objective. This is again achieved through a certain structure as well as an abundance of information. Legal text is, however, objective only as far as the structure of the content goes, not the actual claims. In fact, in modern law, truth is to be found in the middle of two clearly partisan perspectives.

Resolution

Digital storage relies on physical allocation, which is finite, and this brings up the issue of resolution. Each piece of information is stored, retrieved and processed at some level of resolution, or granularity. It’s a concept that’s easiest to picture with an image, where the number of pixels determines its spatial resolution, and where a higher-resolution image can reveal details that may be invisible at lower resolutions.


Same image at multiple resolutions.

The same point of view can be applied to any problem. One may say that his home is being invaded by scary monsters, and he would be right, as long as he’s looking at the carpet with an electron microscope, spotting countless dust mites.


Scary monsters in your house.

This is of course an extreme case, but it’s illustrative of how statements can be true or false depending on the resolution at which one is operating.

We intuitively know that taking a broader perspective on things, instead of focusing on details, is one way to avoid worrying needlessly. The suggestion to “take a step back” or “look at it from the outside” can be thought of as a suggestion to lower the resolution on a problem, to avoid getting entangled in the noise.

Day traders also know the dangers of zooming into a bar chart to look at just a few hours of 1-minute candles, feeling like the market is constantly on the verge of exploding up or going for a colossal dump. A quick zoom-out instead shows a much more stable and static price chart, one that is more comfortable and also more productive to operate on.

Objectivity is a concept, and it’s therefore made up. To carve out a space in which one can argue in an objective manner, it’s important to determine a resolution and to stick to it for as long as possible. A debate can get very confusing if the people involved decide to argue at different resolutions.
This shifting of resolution happens often, sometimes to highlight the importance of a certain level of detail, but many times simply to find a level, a resolution, at which one’s own argument is still valid. Needless to say, that’s not a productive way to come to an objective conclusion.

My Space !

I was never the biggest fan of social media. I embraced Twitter quickly because of the novelty factor, although it never quite made sense, and it still doesn’t, probably for most people that have tried it.
I refused to be on Facebook for years, but I eventually caved, in part because I felt that I was missing the fun, in part because I needed it to promote my mobile games.

Social media does have its uses: having some sort of instant connection with many people at any given time can lead to interesting exchanges, and even to some business. It’s also a quick way to keep an eye on friends and family (Facebook, mostly). However, social media can also be a big time waster. It’s limiting, because it pushes one to express oneself in short sentences, and it’s also weird in a way: there’s this constant state of observing and being observed, or rather, being scrolled past. It’s a very passive medium.
The content is also not so great: after a while, people tend to repeat themselves quite a bit, me included. What’s interesting about people is how they grow intellectually with time. This is not something that can be noticed on social media, because it’s all in bits and pieces and arguments.

That said, the decision to start focusing less on social media comes more from the realization that it has become an oppressive environment. It started a few years ago, but it has escalated really quickly since 2020, mostly because of the US Presidential Elections.
Twitter, Facebook and YouTube in particular have set themselves up as the gatekeepers of truth, mostly about politics, but not exclusively. Some people embrace censorship with open arms, because they figure that it’s objective and done by "the good guys".
Of course the premise is laughable. It’s a fundamental, and frankly basic, error to think in terms of solving the world’s problems by establishing a Ministry of Truth of sorts.

It baffles me to see how certain fundamental questions about the importance of freedom of expression have been debated for hundreds of years at least, and yet today the average person is still raised seemingly oblivious to what should be an obvious conclusion, somehow having to rediscover it once again for himself or herself.

Being a fan of freedom of expression and of freedom in general, I’d feel complicit in eroding those rights if I continued to give as much time to social media as I do today.

It should be noted that there are social media alternatives today that present themselves as free-speech platforms. While I have great hopes for those, I also think that it doesn’t make sense to put too much content on yet another specific service, which will either die off or live long enough to become the next villain.

The state of the F-35 sim project

Here’s an overview of the current state of things. I’ll post some screenshots.
A few videos would have been better, but they take more time to make.

The platform

Image from an older tentative project (art by Max Puliero)

The game (?) doesn’t utilize any commercial engine. It’s all home made, starting from the code base of FCX mobile, ported and evolved, currently targeting OpenGL 4.5. The programming language is C++, plus occasional Python scripts for the build and asset pipeline.
One advantage of having an in-house system is that there is no dependence on specific proprietary software. There are no implicit limitations on what can be done, no licenses to pay and no expiration date on the code base.

The engine features are pretty common nowadays, so I can’t make any big claims, but nonetheless:

  • Forward rendered. I was never a fan of deferred rendering (read: z-buffer legacy). Forward rendering is good for my main target, which is VR, so 90Hz stereo and MSAA. Eventually at least a z-prepass will be implemented for SSAO and such, but not currently.
  • Image Based Lighting (IBL) with some relighting features. Cube maps rebuilt on the fly, then converted to the usual Spherical Harmonics bases for lighting. This is especially useful for elements of GI in the cockpit.
  • Physically-Based Shaders (PBR). This is nothing particularly advanced. Just the run-of-the-mill PBR shaders. Good looks aside, it’s mostly useful to have a robust art pipeline, as in having a method for artists to produce assets that have a consistent quality and that are relatively independent of the lighting in the scene.
  • Cascaded Shadow Maps (CSM). This is also a fairly mundane feature… in theory… because in practice shadows are never easy and never fun. A lot of time was spent between the filtering and the tuning to balance the resolution of the cascades. The end result is good enough for now. But shadows are always a pain.
  • Custom image compression. All images and textures are converted to internal formats, both lossless and lossy. The lossy format is DCT-based (similar to JPEG) and encodes data so that images can be decoded on the fly at reduced resolution. This is ideal for streaming and LOD on-demand. For example, a large 8k texture can be decoded at 1k for an object that is far away and that may not need the full resolution for a while. This is also useful during development and debugging, to speed up turnaround time: everything loads instantly with tiny textures.
  • GUI system. A complete GUI system that is used both for in-game development UI and for actual game UI. This is customizable to the point where it’s being used to reproduce the F-35 avionics GUI.
    The GUI system also works with multi-touch, game pads and VR controllers.
  • VR support. Both at the rendering level and at the UI level, there’s strong VR support. As far as hardware goes, currently only the Oculus SDK is supported; however, there’s an abstraction layer (as there should be) that makes it relatively simple to support other SDKs.
  • 3D Audio. There’s a basic 3D audio abstraction, built on OpenAL, including some workarounds for OpenAL’s limitations. However, the audio system should be replaced with a more modern 3D audio API, to be decided.
    There’s also a custom audio sample compression format based on wavelets that I wrote in a moment of need during mobile development (nothing special, but simple and fast).
  • Data driven. There’s a basic structured data definition language that I built for my needs. It’s similar to JSON, but better. Types can be specified, syntax is more lax in some ways, comments are supported (woohoo !) as well as algebraic expressions for numerical types.
  • Physics and collisions. This is actually mostly on the side of game code. The physics engine at its core is just the average rigid body dynamics. Acceleration for collision detection and ray-casting is done via a voxel subsystem (I like regular structures).
  • Particle system. Particles are at the bare minimum. Particles can be very important, but I’m not a fan of creative particle editing. I prefer to build a physically-based behavior in code.
  • Atmospheric scattering. This is vital for a flying game. It gives a believable feeling of distance, and also a base model for the Sun. It’s based on Sean O’Neil’s article and code from GPU Gems 2. Kind of old, but works great (after a few fixes of some corner cases) and it’s not as complicated to implement as some more modern models.
  • Continuous build system. A Jenkins-based continuous build takes care of constantly updating the latest binaries and assets. Periodical rebuilds are performed and release packages are deployed to a chosen destination.

The application

The project is not much of a game anymore; rather, it’s aiming to be a lightweight simulator, focused on the F-35 Lightning II.

Note 1: a lot of what I learned about flight simulators comes from the users of the Hoggit Discord chat (related also to the Reddit forum). If you’re serious about this stuff, that’s one place where you want to hang out.

Note 2: the plane and cockpit in the screenshots are not the actual F-35, but rather a concept by Max Puliero. He also modeled the terrain that can be seen in the background.

Terrain system

The terrain system is pretty basic, but it can cover terrains of about 500 x 500 km with several large textures.
The build pipeline does polygon reduction (based on the old Stan Melax GDM article !) and converts the large terrain textures to my own LOD format.
There’s currently no vegetation system, so that would be an ideal next step. Much more needs to be done and redone for the terrain.

Weapons system

Weapon systems are pretty important and a fairly complex part of combat simulators. They are a world unto themselves.

I’ve implemented some basic missile guidance, built on a flight model that is based on the airframe (shape, wings, weight), and the rocket motors. The current reference weapon used for development is the venerable AGM-65 Maverick. Although it’s not part of the F-35 arsenal, it’s a missile that is understood well enough to be tested with some confidence.

The guidance model that I implemented is relatively simple. In practice it’s probably similar to classical proportional navigation, but the auto-pilot will have to be refactored to use PI(D) controllers, which may be a better match for proportional navigation (I think).
Still, it’s advanced enough to manage to hit a target by steering the virtual wings, even during the gliding phase, when the rocket motor is spent and careful maneuvering is essential to avoid losing too much energy.
Guidance is also recursively simulated in order to provide a Missile Launch Envelope (MLE) to the plane’s avionics, mostly to be displayed on the HUD/HMD, for the pilot to understand the likelihood of hitting a target from the current position, velocity and heading. The MLE portion of things was kind of exciting because of its recursive nature. Basically, it’s a simulator that has to continuously and extemporaneously simulate potential launches, from start to end. At least that’s how I implemented it, although it’s likely that real-world weaponry uses simpler analytical solutions… but if you’re doing a simulator, most of the work is already there, so one might as well reuse the simulation code.

The Plane’s Flight Dynamics Model (FDM)

My current plane FDM is weaker than that of the missiles. I started off with this one and it was a learning experience. Some things are implemented right, some others will need more work. For starters, because I didn’t have any official (or unofficial) drag and lift coefficients, I resorted to using specs of the F-16 and the F-15.

Doing a plane’s flight model is pretty interesting, but also frustrating. One big issue is that modern planes are all fly-by-wire (FbW). So, assuming that one implements an accurate flight model, then one also has to implement the Flight Control System (FCS), which in many cases cancels out the aerodynamic properties of the plane !

That’s not to say that making a FDM is a waste of time, because it’s an important underlying factor that influences other things (structural integrity, actual maneuverability, energy consumption), but still, it feels like extra work.

Most crucially, this also means that one can’t quite tell the aerodynamic properties of a plane by just flying it, because who knows what the FCS is really doing behind the scenes (at least in the case of something as sophisticated and classified as the F-35).

Still, much can be done by following the breadcrumbs scattered between released specs and articles on the subject. For example, a test pilot may reveal the maximum Angle-of-Attack (AoA) that a plane can maintain at a certain speed, and from that one can attempt to determine what the lift coefficient (Cl) would be.

It’s a big topic and I only scratched the surface.

The Avionics

This is the big one, and the reason why I started looking into the F-35. By avionics here, I mostly mean the parts related to the UI.

The F-35 has two large touch screens, with a custom windowing system that allows splitting the views into portals that can be configured, maximized or minimized (sort of). The pilot interacts mostly by touch, but there is also support for a cursor that can be moved around (usually with a tiny stick on the throttle handle).

Primary displays, with one portal menu selection open.

I’m pretty pleased with my reproduction of the system. Although many of the widgets that can be selected are incomplete or just don’t exist yet, the look and feel seems pretty believable. However, beyond the plain display of some portal view, there are tons of details that can take one down a very deep rabbit hole.

For example, the Tactical Situation Display (TSD) shows the shape of what is probably the SAR (Synthetic Aperture Radar) scope, showing which areas are in the scan range. The shape of the SAR scope is tied to some details of operation, including some blind spots. So, one starts off trying to reproduce a couple of curves on the TSD and may end up looking into radar technology.

Another example is the engine display, which doesn’t simply give an RPM, but also the Exhaust Gas Temperature (EGT). Now, to give a believable display of that element, one needs to build at least some logic to simulate the basic behavior of a jet engine: its stages, the range of temperatures it reaches.

HUD portal maximized taking the entire left screen. Also ENG(ine) widget (only EGT and Throttle gauges working).

Much work went into the TSD, because it’s what allows scanning for and scheduling a kill-list of possible targets. But also, more recently, into the auto-pilot (AP) panel, which allows punching in number codes for altitude, heading and speed for the plane to maintain (it works rather nicely… for some reason it’s now more fun to program the AP than to fly around freely, probably because it feels like it has a mind of its own).

Auto-pilot climbing to the selected altitude of 230 (23,000 ft).

The HMD/HUD

There is no fixed HUD in the F-35. All classical HUD symbology is projected directly into the eyes of the pilot, like some sort of AR glasses. This is a great excuse to use VR.

My HMD/HUD implementation is fairly decent, I think, and with a good amount of detail. However, some problems still need work. Specifically, one big issue is the fact that the HMD gives a sense of additive lighting, leaving the background images clearly visible.

Stereo projection allows drawing, for example, a target symbol at a virtual distance far away, right where the target is. However, if a piece of the cockpit is in the way, then the contrast between the far-away symbol and the close-by cockpit…

From arcade to simulation (gunning for the F-35)

Here’s a long overdue project update, also with some clearer details on the current direction.

I’ll be writing from a personal perspective, because for the time being it’s just me (Davide) working on this. All art is by Max Puliero, as usual.

First of all, the goal has pretty much shifted towards making an F-35 flight simulator (more or less, considering that most info is classified, and considering the sheer complexity of the real thing).

A Harrier’s HUD. Gritty and functional.

I’ve always been interested in the technology more than the games themselves. I had a taste of flight simulation development while creating FCX, where the goal was to make a sci-fi game that was also plausible, which is one reason why there were no guns (the other reason being… laziness).

While developing FCX, I often found myself struggling to implement things such as the HUD (Head-Up Display). A HUD looks cool aesthetically, but why is it the way it is, and what do all the symbols mean ?

Once one starts learning how to read the real thing, it’s very hard to look at another game HUD without having a chuckle. It’s a bit like watching someone act in a foreign language: it works, until it’s your own native language, and then it’s just funny… gone is the suspension of disbelief, forever.

whatever…

This project was meant as an evolution of FCX, focused on VR on PC, and therefore it had to give a compelling cockpit experience. I started taking the F-35 avionics as a reference and then found myself learning more and more about the airplane, and about airplanes in general (beyond the general passion that I may have had in the past).
I also realized that there is a healthy flight combat simulation community that produces mods with realism that goes well beyond what even the average gamer may think. Modern flight combat simulators are a niche unknown to most, but the level of realism and the involvement around them is nothing like the sims from the 90s (the last time they almost weren’t a niche).

At the current state, however, this is still pretty much a hobby project. Ideally, we’ll be able to find someone who believes in the project and is willing to support its development. It’s more likely, however, that I’ll have to continue in my own free time, which is now scarcer than ever. In fact, I pretty much had to pause development for the past two months.
Still, I’ve put so much effort into this (some of it to be detailed in the next post) that it would be a waste to leave it as it is.

Avionics’ improvements

Much work went into the game since the last update. Some of it will require a longer post to be explained. I’ll post here some screenshots of the more recent work that is most clearly visible.

Here’s an image of the latest digital display, which represents pretty much all of the avionics GUI in the plane.
The display is heavily based on that of the F-35, as seen in public showings of simulators of the airplane.

The display is divided into two screen halves, each of which can contain two vertical windows or panels, each of which can have two child windows at the bottom.
The F-35 windowing system leaves some room for reorganizing the layout, something I haven’t yet implemented, but that will come eventually.

Some of the windows on display are at least partly functional. Left to right: the Stores Management System (SMS), the Tactical Situation Display (TSD), the Forward Looking InfraRed display (FLIR) and a generic map display, which will have to be replaced.
The SMS has received some cosmetic improvements, while the TSD received the bulk of the recent work.

The TSD now has a cursor that can be used to select a potential target, zoom in on the area to determine if multiple targets are overlapping, and then designate a target to be shot. This is especially important for ground targets, which are usually planned early in a mission.

The FLIR display is not active in the screenshot but, when active, it produces a pseudo-IR zoomed view of the selected target, for visual confirmation both when designating a target and later, after the target is hit, to assess the damage.

Focus, meaning which window controller and keyboard inputs act on, is determined by the window the mouse hovers over (non-VR mode), or by the window at the center of the visible area (VR mode). A green border is also used as visual confirmation of the window currently receiving input messages.

The general display quality was also improved, both in terms of resolution and by increasing the number of MSAA samples.
The game uses MSAA anti-aliasing, both for the final rendering and for the rendering of the cockpit displays. This is important because, if we’re to simulate actual instrumentation with the right proportions, then we also need extreme clarity of display.

Better physics for missiles

We’ve recently taken a break from graphics and avionics UI rendering to focus on physics. More specifically, on aerodynamics.

Sample image of an AGM-65 gliding to hit a tank

This came about because missiles needed an improvement in hit precision.
A game can cheat at will and always make a missile hit its target, but the effect is not pleasing and the strategic element goes away: as behavior is no longer tied to the laws of nature, the experience becomes less believable.

A missile’s ability to hit a target is determined by its capacity to identify and track the target, but also by the raw performance of its rocket motor and of its airframe, or body.
Air-to-ground/surface (AG or AS) missiles don’t necessarily have enough rocket fuel to reach a target, and may often end up reaching it while gliding, much like a smart bomb.

For this reason, we decided that improving the aerodynamic simulation was important to capture the nuances of hitting a target with realistic weapons, whose realistic physical parameters give them specific advantages and disadvantages.

We’re currently working with a few real-world references, so as to have a rough idea of what kind of rocket motor and airframe should correspond to a certain performance range.

AGM-65 airframe debug view

Here’s a wire-frame debug screenshot of a model of an AGM-65 Maverick in our game (graphics are in pure simulation mode, not representative of the actual game graphics).

Blue areas are the definition of the wings.
Magenta lines represent the drag force (Fd).
Cyan lines represent the lift force (Fl).
The smaller green box represents the center of gravity (CG).

Much more work is necessary, but it’s already exciting to see a missile fly following the proper laws of physics, although this has complicated things a fair bit.

 

At this point, the aerodynamics of the actual airplane is less advanced than that of the missiles, but in time the improved model will be transferred to the plane as well… although with jet fighters it’s not that simple, but that’s a topic for another post…

Check out NASA’s friendly explanation of the 4 major forces acting on an airplane, or a missile.