An Interview with SiSoft Sandra Dev., Adrian Silasi
Date: Dec 19, 2011
Section: Mobile
Author: Joel Hruska
Sandra's Development
Last month, SiSoftware released the 2012 edition of Sandra, the popular system analysis and benchmarking program. We recently sat down with Adrian Silasi, the creator and chief developer of Sandra, to discuss how he got started, how Sandra distinguishes itself in a crowded marketplace, and whether we'll see the utility popping up on other devices in the future.



Before we hit the interview, a bit of introduction is in order. Sandra was one of the first popular benchmark suites, but it's evolved considerably over the past 14 years. While we tend to use it for straightforward CPU and memory tests, it's capable of much, much more—particularly when it comes to exploring specific aspects of CPU or GPU performance.



New in Sandra 2012 are the ability to generate a total system performance result (calculated using a geometric mean to prevent outliers from skewing the data), a new benchmark result certification engine that allows users to verify that their systems' results match what they ought to be seeing, and the ability to run certain tests on the CPU, GPU, or APU to ensure an apples-to-apples comparison. The screenshot above is from the full version of the program, but all of the most useful benchmarks and hardware data are available in the free 'Lite' flavor as well.
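
To make the geometric-mean approach concrete, here is a minimal, hypothetical C++ sketch (not Sandra's actual code) of how per-test scores might be folded into a single overall index; the function name and score values are illustrative assumptions.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Hypothetical sketch: combine per-test scores into one overall index with
    // a geometric mean, so a single outlying result cannot dominate the way it
    // would in an arithmetic average.
    double geometric_mean(const std::vector<double>& scores) {
        if (scores.empty()) return 0.0;
        double log_sum = 0.0;
        for (double s : scores)
            log_sum += std::log(s);   // summing logs avoids overflow on large products
        return std::exp(log_sum / scores.size());
    }

    int main() {
        // Example: three sub-scores, one of them an outlier.
        std::vector<double> scores = {100.0, 110.0, 400.0};
        // Prints ~163.9, versus 203.3 for a plain arithmetic average.
        std::printf("Overall index: %.1f\n", geometric_mean(scores));
    }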

The other major use for Sandra, particularly if you do a lot of tech support, is as a handy hardware identification tool / troubleshooter when working with folks who don't know much about their own equipment. While it isn't as focused as, say, MemTest86+, it can be invaluable for exploring a knotty problem.

After nearly fifteen years of work, you might think SiSoftware would have declared Sandra a finished product and moved on to other projects, but that's not the case at all. In our interview, Silasi discussed his long-term plans for the venerable software suite...
Our Interview
Sandra has been around since the early days of hardware reviews and benchmarking. How (and why) did you get started?

I started way back when my father bought me a Commodore 64; I wrote some programs for it and just progressed from there (music programs as I was studying music at the time – no benchmarks at all).

I went on to study Electronics Engineering (not music, to my parents' disappointment), where we also learned some programming (Eiffel, IBM AS360, etc.) but nothing mainstream (C/C++, Pascal). I had just gotten my first real computer (a 386, bought second-hand from the university) with Windows 3.1, and I bought a copy of Turbo Pascal for Windows from a teacher.
I thought: "What's the best way to learn programming/APIs?" Write a "do all, be all" utility, aka a "system information" app. That became "SAW (System Analyst for Windows)," written in Turbo Pascal (!). I released it as freeware to see what other people thought. About a year later, a company wanted to buy it, and since I was paying fees (I did not have a grant), I sold it to them.

When Windows 95 came out, I decided to build a better version from scratch to learn C/C++ and stick to the official SDK and tools. This became "SANDRA," and it just grew from there. It was also released as freeware; when university was over and I could not find a good job, someone suggested I make a shareware version, like the great games of the day (e.g. DOOM).


What, in your opinion, sets Sandra apart from the many other hardware benchmark/information suites out there (including AIDA, Passmark, Crystalmark, etc.)?

I think all benchmarks – if fair and valid – are useful for something. I don't think there is one "end all, be all" benchmark – nor should there be! It's just a question of whether or not they measure what you are looking to measure.

You left out the 800-pound gorilla in the room, FutureMark: it would be a mistake to let one company have a monopoly, as with any market. I think everybody mentioned should aim higher and provide real competition.


Are there any Sandra features / capabilities you feel have been largely overlooked?



The program has evolved considerably

Quite a few of them: I am always amazed at how few of the features get used.

There are benchmarks that measure metrics I have not seen in other benchmarks (multi-core transfer efficiency, power management efficiency, GPGPU/APU performance – using the same workloads as the CPU tests and supporting CUDA, OpenCL, and Compute Shader – and .NET and Java, to name a few).

It has integrated (free) two-way Ranking functionality (it not only posts results but also downloads results and performs statistical analysis for score certification), as well as integrated Pricing functionality for more details (pricing, pictures, specifications).


How would you respond to critics who claim Sandra is just a series of synthetic tests (and thereby inferior to "real world" benchmarks)?


I think both synthetic and “real world” benchmarks have their uses and neither should be ignored. Synthetics are very useful to “drill-down” and find out the reason for performance issues/gains as they are designed to measure specific performance indexes.


What are your plans for the future of the program? What test suites (if any) do you plan to add in the future?



Sandra's 'Favorites' section is an easy way to access the modules you use most often

That depends entirely on users and what they are interested in; I always try to listen to everybody and if something makes sense I do it. Many of the features in Sandra have originated this way. I’m just as eager to find out what the future holds.
Our Interview (cont.)
With all the recent discussion of tablets, will we see a version of Sandra for Android (or for mobile phones)? Is there an iPhone app in the works?

It just depends on whether I have the time to build a decent app that I think people would find useful. I have naturally played with all the SDKs, from Palm and Windows CE to MeeGo, but finishing a full-featured app (and maintaining it) would be a full-time job.

If I had started 5 years ago (not 15!), I would probably have written for iPhone/iPad or even Mac OS X. Perhaps one day I will ;))


What program features are you most proud of (or do you think are most useful)?



Benchmarks like Sandra's multi-core efficiency test are a great way to isolate performance metrics that matter -- even if they aren't commonly published test results

Porting the CPU benchmarks to (GP)GPU, including APU testing (CPU + (GP)GPU), across all the GPGPU programming methods (CUDA, OpenCL, DirectX Compute Shader, STREAM) was a challenge. They should prove useful now that many modern and just about all future CPUs include a GPU; all modern programs, not just games, will need to harness both CPU and GPU for best performance – and measuring that should be useful.

I find the client/server capability pretty cool: it's even used when running on the same host! It's a feature users screamed for years to have implemented, and once it was, I haven't heard a peep! It must work so well and transparently that nobody notices it…


Have any vendors ever attempted to persuade you to use certain compilers (or techniques) that, in your own opinion, stepped over the line between "providing support" and "skewing results"?


I always try to see the other person's point of view: everybody just wants their "best" – and the tools/techniques (if fair) are a way to obtain the best performance out of their product. If the results are the same and there is no artificial penalty, there is no problem.

If SANDRA were a performance-dependent product (e.g. a game or productivity utility), you would see a benefit in making it faster (i.e. more FPS, faster work completion time), but for a benchmark there is no gain whatever the score. So there is no benefit to "making it faster" for one product or another, which is why I don't use any vendor compilers.

Techniques are a different case: sometimes you may inherently optimize for an architecture simply because it was the first (e.g. Intel with SSE and AVX; AMD with x64). You may find, upon testing new architectures, that you have to modify the code to work "well enough" for all, or, in the worst case, maintain a different code path using different techniques for a different product. It may or may not be from another vendor, although it is more likely to be so.
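
As a purely illustrative example of the "different code path per architecture" idea (an assumption on our part, not Sandra's actual implementation), a benchmark can detect instruction-set support at run time and dispatch to a routine tuned for it; the sketch below uses the GCC/Clang __builtin_cpu_supports built-in on x86, and each kernel stands in for an optimized benchmark loop.

    #include <cstdio>

    // Hypothetical sketch: pick a code path at run time based on the CPU's
    // instruction-set support, so newer architectures are not penalized by a
    // generic routine.
    static void kernel_avx()     { std::puts("AVX code path"); }
    static void kernel_sse2()    { std::puts("SSE2 code path"); }
    static void kernel_generic() { std::puts("generic code path"); }

    int main() {
        __builtin_cpu_init();                     // populate the CPU feature flags
        if (__builtin_cpu_supports("avx"))
            kernel_avx();
        else if (__builtin_cpu_supports("sse2"))
            kernel_sse2();
        else
            kernel_generic();
    }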



We hope you enjoyed this interview with SiSoftware's Adrian Silasi. If you have any input or questions you'd like us to ask of Mr. Silasi, please let us know in the comments.


Content Property of HotHardware.com