Geekbench 3 compile options for iOS and for Android (ARM)
Guys,
It seems that Apple's A6/A6X scores have "jumped" in GB3 compared to other ARM-based SoCs in GB2 (it's difficult to compare without a search feature, but I compared the Nexus 10 vs. the iPad 4). I'm wondering if you could clarify which compiler versions (and options) were used for GB2 and GB3 on both iOS and Android?
Regards,
Support Staff 1 Posted by John on 29 Aug, 2013 07:22 AM
I'm sorry for the delay in getting back to you.
Geekbench 2 for iOS is built with GCC 4.2, while Geekbench 2 for Android is built with GCC 4.6. We use conservative compiler optimizations (-Os) on both platforms.
Geekbench 3 for iOS is built with Clang 3.3, while Geekbench 3 for Android is built with GCC 4.8. We use aggressive compiler optimizations (-O3 -ffast-math -fvectorize) on both platforms.
One of the reasons for the big jump in iOS scores is that GCC 4.2 had poor ARM code generation, especially when compared with more modern compilers like Clang 3.3 and GCC 4.6. We think switching to the latest compilers has "leveled the playing field" (so to speak) between Android and iOS.
Let me know if you have any other questions and I'd be happy to help out.
2 Posted by redblue on 30 Aug, 2013 10:39 AM
Thanks John, makes sense.
For the complete set, what compiler/options are used for x86 on Android for both GB2 and GB3?
Regards.
Support Staff 3 Posted by John on 04 Sep, 2013 06:06 AM
Geekbench 2 for Android uses GCC 4.6 for x86 with similar optimization flags (-Os).
Geekbench 3 for Android uses GCC 4.8 for x86 with similar optimization flags (-O3 -ffast-math -ftree-vectorize). We do specify additional flags to tune for the Atom architecture (-mtune=atom -maes -msse2). Note that we do the same for ARMv7 as well (-march=armv7-a -mfloat-abi=softfp -mfpu=neon).
Again, let me know if you have any other questions and I'd be happy to help out.
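[Editor's note: for readers reconstructing these builds, a hypothetical sketch of what the full GCC 4.8 invocations might look like based on the flags quoted above. The source file name and toolchain prefixes are placeholders, not Geekbench's actual build system.]

```shell
# x86 Android build, tuned for Atom (placeholder file names):
i686-linux-android-gcc -O3 -ffast-math -ftree-vectorize \
    -mtune=atom -maes -msse2 -c workload.c -o workload_x86.o

# ARMv7 Android build, NEON enabled, softfp float ABI:
arm-linux-androideabi-gcc -O3 -ffast-math -ftree-vectorize \
    -march=armv7-a -mfloat-abi=softfp -mfpu=neon -c workload.c -o workload_armv7.o
```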
4 Posted by redblue on 04 Sep, 2013 09:33 AM
Thanks John,
It's been a while since I've looked at the various compile options, but if building for ARMv7, shouldn't the float ABI option be hardfp? Using hardfp can have a significant effect on performance:
https://wiki.linaro.org/OfficeofCTO/HardFloat/Benchmarks
I guess there could be other dependencies that are forcing the softfp option.
Regards,
Support Staff 5 Posted by John on 06 Sep, 2013 02:10 AM
My understanding is that softfp is a required compiler flag for Android (it's what the NDK uses for both ARMv5 and ARMv7 code). Also, we've shared our compiler settings with a number of Android device manufacturers, and none suggested switching from softfp to hardfp.
That said, we'll certainly look into switching from softfp to hardfp. I don't expect it would make much of a difference, though, since the number of function calls in our floating point workloads is pretty minimal (BlackScholes and FFT being the two notable exceptions).
6 Posted by Oscar on 01 Dec, 2013 10:28 AM
Given that using the "recommended" softfp flag on Android gives worse results than hardfp on devices with an FPU, I suggest switching to hardfp when a hardware FPU is detected. As you know, this can be done at run time.
FPU-intensive Android NDK apps usually use the hardware floating-point unit where available. Because of this, I think it would be more equitable for Geekbench to use the FPU when available, to give more realistic and comparable results for Android devices.
Regards
Oscar
Support Staff 7 Posted by John on 07 Dec, 2013 05:07 AM
Hi Oscar,
Thank you for your message.
We have tried the -mfloat-abi=hard calling convention with Geekbench 3 for Android, but it doesn't work, as the Android system libraries expect the -mfloat-abi=softfp calling convention.
Don't forget that with both calling conventions floating-point operations are still performed in hardware. We might see a small speedup when switching from softfp to hard on workloads such as BlackScholes, but generally function call overhead is small compared to the computation time.
Let me know if you have any other questions and I'd be happy to help out.
Best,
John