Tuesday, November 20, 2007

Benchmarks everywhere

It seems that my benchmarking "initiative" has had some effect: several vendors/frameworks are now running the same test with the same tool.

AIDA/Web vs Seaside:
http://www.aidaweb.si/benchmarks/wapt-swazoo-20.html

Seaside on VisualWorks and Squeak vs. Ruby on Rails:
http://www.cincomsmalltalk.com/blog/blogView?showComments=true&printTitle=More_Seaside_Testing&entry=3372921925

My own tests (which will be replayed):
http://dolphinseaside.blogspot.com/2007/11/some-benchmarks.html

And the initial, and more exhaustive, Seaside on Gemstone benchmark:
http://gemstonesoup.wordpress.com/2007/10/19/scaling-seaside-with-gemstones/

Since all of us are using the same tool I chose (WAPT), we should define a clear, unified test suite, upload it to a public place, and run it against the different implementations. The results published by AIDA and Cincom differ significantly from each other.
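To make the idea of a shared suite concrete, here is a minimal sketch of what a reproducible load test measures, assuming (as WAPT's defaults roughly do) "N concurrent users hammering one URL and counting throughput". The function name, URL, and user counts are illustrative placeholders, not the actual suite any of us ran:

```python
# Hypothetical sketch of a shared load test: N concurrent "users" each
# issue a fixed number of GET requests, and we report successful
# requests per second. All names and numbers here are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def run_load_test(url, users=10, requests_per_user=20):
    """Fire users * requests_per_user GETs at url and report throughput."""

    def worker(_):
        ok = 0
        for _ in range(requests_per_user):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        ok += 1
            except OSError:
                pass  # count failed requests as misses, keep going
        return ok

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(worker, range(users)))
    elapsed = time.perf_counter() - start
    total_ok = sum(results)
    return {
        "ok": total_ok,
        "seconds": elapsed,
        "req_per_sec": total_ok / elapsed if elapsed else 0.0,
    }
```

The point of publishing something this explicit is that everyone agrees on what a "user", a "request", and a "success" are before comparing numbers across Seaside, AIDA, or Rails.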

In any case, it is important that we put benchmarking on the table.

See you in Smalltalks 2007.


1 Comment:

At 8:36 am, Blogger Janko Mivšek said...

Esteban,

I very much agree that we need to define a common test case and set up the tool in the same way, but before that we also need to understand the tool and what we are actually measuring.

Currently I have the feeling that we just blindly follow some default settings and then interpret results that probably mean something different from what we think. So understanding the test is the first step to interpreting results correctly.
