I know that if I were an Intel CPU owner from within that time frame [which I am], I'd be mightily pissed at the huge performance hit, but I'd also be suspicious.... very suspicious. I mean, why has this only come out now... a 10 year old flaw? And why is it necessary for these CPUs to take such a big performance hit when patched? Does anybody else smell something fishy?
Ok, there appears to be some misunderstanding about what is going on, and I'm actually going to take Intel's defense on this one:
First, the vulnerability is indeed related to speculative execution, but that's NOT what is being turned off in the patches (thank God, or we would be back in the stone age lol!).
Second - and this is where I can see Intel's side - this vulnerability ONLY happens because of PERFORMANCE optimizations made by *OS vendors*.
Let me see if I can explain this in a simple way: Intel provides 'rings' of security, and code in outer rings cannot access code running in inner rings. So, the Kernel would be running in ring 0, and user code in ring 3. This is great, works as intended and, so far, has no known vulnerabilities.
So what happened? Switching from one ring to another takes some time, from a CPU's perspective. To avoid these costly switch operations, OS vendors 'hide' the Kernel in the same address space as user code, i.e., the Kernel is mapped into each process's own address space. That Kernel section is still protected (the CPU will throw an exception if user code tries to access it directly), but the level of protection is not the same.
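To make that layout concrete, here's a toy model of my own (this is a simplification, nothing like real hardware or real page tables): the Kernel lives in the same address space as the user's pages, but its pages are flagged supervisor-only, so a direct access from ring 3 raises an exception.

```python
# Toy model: one address space containing both user and kernel pages.
# Kernel pages are marked supervisor-only; a ring-3 access raises a fault.
# Addresses and contents are made up for illustration.

class Page:
    def __init__(self, data, supervisor_only):
        self.data = data
        self.supervisor_only = supervisor_only

class AddressSpace:
    def __init__(self):
        # User pages and kernel pages live in the SAME address space.
        self.pages = {
            0x0040: Page(b"user code", supervisor_only=False),
            0xC000: Page(b"kernel secrets", supervisor_only=True),
        }

    def read(self, addr, ring):
        page = self.pages[addr]
        # The permission check: ring 3 (user) may not touch ring-0 pages.
        if page.supervisor_only and ring == 3:
            raise PermissionError("page fault: supervisor-only page")
        return page.data

aspace = AddressSpace()
print(aspace.read(0x0040, ring=3))   # user reading user memory: fine
print(aspace.read(0xC000, ring=0))   # the Kernel reading itself: fine
try:
    aspace.read(0xC000, ring=3)      # user code touching kernel memory
except PermissionError as e:
    print("blocked:", e)
```

The point of the trick: because the Kernel is already mapped, a syscall doesn't need to swap address spaces, only rings, which is cheaper.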
Because of this, the speculative execution part of the CPU is still able to access this Kernel area even when currently running user code. It is only when an instruction in user land actually *tries* to access this forbidden part of memory that the operation fails. However, traces of that Kernel data might already be sitting in the CPU *cache* from the speculative access.
Using some complicated methods which I don't fully understand (essentially, data already sitting in the cache is read MUCH faster than data that isn't), it's possible, over time, to build a map of the contents of the Kernel. This is a really convoluted process, guys!
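The timing-inference step is the part I *can* sketch. This is a pure simulation of my own (no real speculation, fake latencies, made-up secret): assume speculative execution already pulled one line of a 256-entry probe array into the cache, indexed by a secret Kernel byte. The attacker never reads the secret directly; they just time every probe line and see which one is fast.

```python
# Simulated cache side channel: only the inference step, nothing real.
# Assumption: the speculative access left probe line [SECRET] cached.

SECRET = 0x2A                            # the byte the attacker wants
CACHE_HIT_NS, CACHE_MISS_NS = 40, 300    # made-up but plausible latencies

# What the CPU left behind: only one probe line is in the cache.
cached_lines = {SECRET}

def timed_access(index):
    """Pretend to time a memory access to probe line `index`."""
    return CACHE_HIT_NS if index in cached_lines else CACHE_MISS_NS

# Time all 256 lines; the fast one reveals the secret byte.
latencies = [timed_access(i) for i in range(256)]
recovered = min(range(256), key=lambda i: latencies[i])

print(f"recovered byte: {recovered:#04x}")   # matches SECRET
```

Repeat that for every byte of interest and, in time, you have your map of the Kernel. The real attack has to fight noise, cache evictions and the rest, which is why it's so convoluted.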
However, this is only possible because of the performance optimization OS vendors *chose* to implement in their OSs. Intel's fault here is allowing speculative execution, while running user code, to touch memory that would otherwise throw an exception.
Intel provided the mechanism for this not to happen, but OS vendors chose not to use it for performance reasons. In the end something nobody had foreseen - until Google engineers found it last year - came back to bite them both in the proverbial bottom.
Some allege that Intel already knew this could happen when they implemented the system, but chose to allow speculative execution to access forbidden regions of memory for performance reasons anyway. Whether this is true or not I do not know.
The x% performance decrease happens because most of the Kernel is no longer mapped into user space after the fix, forcing the CPU to switch rings (which is slower) whenever it needs to reach Kernel code. Since the Kernel performs all file I/O operations, that is where you will normally see the biggest performance decreases.
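You can get a feel for that transition cost yourself. Here's a rough microbenchmark of my own (numbers vary wildly by machine, OS and patch level, so treat them as illustrative only): each os.getpid() call has to enter the Kernel, so it pays the user-to-kernel round trip that a plain Python function call does not, and after the fix each such entry also pays the extra address-space switching.

```python
# Rough sketch: compare the per-call cost of a syscall against a call
# that never leaves user space. Absolute numbers are machine-dependent.
import os
import time

N = 100_000

t0 = time.perf_counter_ns()
for _ in range(N):
    os.getpid()              # syscall: crosses into ring 0 and back
syscall_ns = (time.perf_counter_ns() - t0) / N

def noop():
    return 1

t0 = time.perf_counter_ns()
for _ in range(N):
    noop()                   # stays entirely in user space
call_ns = (time.perf_counter_ns() - t0) / N

print(f"getpid: ~{syscall_ns:.0f} ns/call, plain call: ~{call_ns:.0f} ns/call")
```

A workload doing millions of small reads and writes makes that trip constantly, which is exactly why I/O-heavy loads show the biggest hit.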