creating visual output to the monitor is like 3% of what computer programs can do - you are basically limiting computing to "playing games" here.
it would be more than interesting to see some real-life tests of arbitrary computing jobs between the fastest PPC and the fastest M2.
unfortunately most of the applications i would like to run in a Mac OS 9 emulator have custom copy protection methods, require access to hardware, or won't run in emulators for other reasons. :/
Be it "3%" or "50%" of "what computer programs do" (that will really depend on the back-end workload; it could be less than 1%, or it could be 99%), it is 100% of the user experience, simply put.
It's not just games: even merely browsing and poking around the Finder is a lackluster experience, which is one of the reasons Classic mode is so unbearable to put up with on Panther/Tiger as an OS 9 user. Even without a GUI, a CLI or text editor will not be as smooth. Not that smoothness is required for achieving much, but it's relevant for the usage experience (and it can have a minor practical impact for users who respond very quickly to visual feedback). This is also why CRTs are a thing (lowest display lag), be it in gaming or otherwise.
(Btw, demoscene is a thing, too, as is watching media in general. That will be 100% of the experience, again.)
For background tasks, like I mentioned, the likes of an M2 or Talos II should be able to surpass the real machine only in I/O, not in any actual processing. It is not just a matter of leveraging the native M2 / Talos II hardware with better software; the hardware itself cannot virtualize well enough, because it is not powerful enough. Cameron Kaiser put up a bunch of posts on his Talospace blog some years back, doing tests with his POWER9-based (!) Talos II, meaning CPU emulation is skipped entirely, and it simply is not cut out for the job (of surpassing, or even catching up to, the performance of the real thing, although he seemed happy with his crappy results).
For PowerPC, we simply do not have the hardware that can do it. As far as emulators are concerned, the closest to the real thing, and certainly surpassing the speed of the real thing, is mini vMac, but that's 68k. Basilisk ][ and the others are not as smooth or as accurate, although they too surpass the original's speeds; again, though, that's 68k.
Better than emulation are things like Wine, and Darling (OS X apps on GNU/Linux). Basically making apps run natively, no virtualization ("emulation") bullshit, regardless of whether CPU instructions need to be emulated or not. We see this with Executor on Windows and OS X; it only runs a handful of apps accurately (e.g. Lemmings), but those that do run, run perfectly, just like a native app, which is beautiful and a sight to behold. Along those lines we also have MACE (https://mace.software/files/), although you need to do some hackeroo to get it to run arbitrary software, since its devs are simply not releasing the source code, nor releasing it in a way that lets you create your own app packages. But it also works brilliantly, much like Executor, if what you want to run is compatible.
Answer: No.
Because CISC CPUs, like Intel's and AMD's, are really bad at emulating RISC CPUs, like the PPC family, emulation speed depends mainly on the maximum clock speed in MHz, no matter the number of cores... and it usually performs at only around 25% of that speed compared to the original hardware. So a top Intel CPU running at 3 GHz barely surpasses a G4 at 750 MHz in basic tasks. Generally a G4, even at speeds lower than 750 MHz, will surpass any emulation thanks to extra enhancements such as AltiVec, multiprocessing, video acceleration, audio DSPs, etc...
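To make the rule of thumb above concrete, here is a back-of-the-envelope sketch. The ~25% efficiency figure is the poster's own estimate, not a measured constant, and the function name is made up for illustration:

```python
# Sketch of the "~25% of host clock" rule of thumb quoted above.
# Assumption: the 0.25 efficiency factor is the poster's rough estimate,
# not a benchmarked value; real emulators vary widely by workload.
def emulated_equivalent_mhz(host_mhz, efficiency=0.25):
    """Rough PPC clock speed a host CPU can emulate at the given efficiency."""
    return host_mhz * efficiency

print(emulated_equivalent_mhz(3000))  # -> 750.0: a 3 GHz host ~ a 750 MHz G4
```

Which is exactly why, under this estimate, real G4 hardware at well under 1 GHz still outruns emulation on a multi-GHz host.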
From what I could tell, regarding the whole RISC/CISC thing, we were utterly lied to: Intel processors have been RISC processors internally since the Pentium Pro (the P6 microarchitecture, which the Pentium II was then built on). It's just that Intel/AMD added a CISC instruction emulator on top of it. In marketing material, they call it the likes of "RISC caching" and whatnot, if memory serves right, but make no mistake: those processors NATIVELY can ONLY run RISC-like instructions. All CISC instructions are broken down into the native RISC ones, meaning all CISC instructions are emulated. It seems like there was a rush toward RISC architecture benefits during the late '80s and early '90s by all CPU companies, and Intel chose the emulation route because of their huge backwards-compatibility requirements. Not a single CISC instruction is executed natively in those. (I had a lot of references to material on this topic that I had put on the Macintosh Garden some years ago. We can dig those back up, if anyone is curious.)
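The "breaking down" described above can be sketched as a toy decoder. This is a heavily simplified model of a P6-style front-end, not a real x86 decoder; the tuple format and micro-op names are invented for illustration:

```python
# Toy model of a P6-style decoder front-end (assumption: heavily
# simplified; real decoders are far more complex). A CISC-style
# read-modify-write instruction is cracked into RISC-like micro-ops,
# each touching either memory or registers, never both.
def crack_into_micro_ops(instr):
    op, dst, src = instr
    if dst.startswith("["):             # memory operand as destination
        return [("load",  "tmp", dst),  # fetch the memory operand
                (op,      "tmp", src),  # ALU work on registers only
                ("store", dst, "tmp")]  # write the result back
    return [instr]                      # register-to-register ops map 1:1

# An x86-style `add [0x1000], eax` becomes three micro-ops:
for uop in crack_into_micro_ops(("add", "[0x1000]", "eax")):
    print(uop)
```

The point being: what executes internally is only ever the simple load/ALU/store micro-ops, which is what the "Intel is RISC inside" argument rests on.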
The clock speeds are relevant, but what seems to matter more is execution pipeline length. I honestly don't know the technical details, but you can see the relative "irrelevance" of clock speed when you compare, say, a 3GHz Pentium 4 (huge pipeline; an ultra-slow processor that is also extremely inefficient and hot) and one of those Intel Core processors at the same clock speed (Core i3, i5, etc.). The latter will smoke those poor Pentium 4s. Of course there's CPU cache and all that, but this can be observed even otherwise. I believe this is what was dubbed "the GHz myth" or "the MHz myth". There's also IPC (instructions per clock/cycle), etc.
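The IPC point boils down to simple arithmetic: throughput is clock times instructions-per-clock. The IPC figures below are hypothetical round numbers chosen to illustrate the effect, not measured values for those chips:

```python
# Illustration of "the GHz myth": throughput = clock x IPC.
# Assumption: the IPC figures below are hypothetical, for illustration
# only; they are not measured values for any particular CPU.
def throughput_gips(clock_ghz, ipc):
    """Billions of instructions retired per second."""
    return clock_ghz * ipc

pentium4 = throughput_gips(3.0, 0.7)  # long pipeline, low IPC (hypothetical)
core_cpu = throughput_gips(3.0, 2.1)  # same clock, higher IPC (hypothetical)
print(round(core_cpu / pentium4, 2))  # -> 3.0: triple the work at the same GHz
```

Same 3 GHz on the box, three times the work done, which is why comparing raw clock speeds across microarchitectures is meaningless.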
But you are right about your point: even a "slow" PowerPC machine will run much, much faster than emulation on even a 3GHz or 4GHz modern-day Intel CPU. Even POWER9, which does away with CPU emulation entirely, is not cut out for the job, since the main issue lies in (system environment) virtualization. Somewhat surprisingly, CPU emulation is not that big of a deal. We have even early PowerPC chips running 68k apps in a way indistinguishable from a real 68k Mac on our PowerPC System 7 ~ Mac OS 9 systems all the time, after all.
Those are my (very inflated) 2 cents.