Channel: VMware Communities : Popular Discussions - VMware Server Archives

holy cow!


want to see something pretty damn impressive??

 

real FreeBSD box: Pentium 4 1.8GHz, 1.5GB RAM, single 120GB disk on a standard IDE channel.  here is its 'diskinfo -ctv':

 

pollux# diskinfo -ctv /dev/ad1

/dev/ad1

        512             # sectorsize

        120034123776    # mediasize in bytes (112G)

        234441648       # mediasize in sectors

        232581          # Cylinders according to firmware.

        16              # Heads according to firmware.

        63              # Sectors according to firmware.

 

I/O command overhead:

        time to read 10MB block      0.182131 sec       =    0.009 msec/sector

        time to read 20480 sectors   1.934665 sec       =    0.094 msec/sector

        calculated command overhead                     =    0.086 msec/sector

 

Seek times:

        Full stroke:      250 iter in   5.622271 sec =   22.489 msec

        Half stroke:      250 iter in   4.339111 sec =   17.356 msec

        Quarter stroke:   500 iter in   6.959858 sec =   13.920 msec

        Short forward:    400 iter in   2.147981 sec =    5.370 msec

        Short backward:   400 iter in   1.184482 sec =    2.961 msec

        Seq outer:       2048 iter in   0.213627 sec =    0.104 msec

        Seq inner:       2048 iter in   0.217149 sec =    0.106 msec

Transfer rates:

        outside:       102400 kbytes in   1.791294 sec =    57165 kbytes/sec

        middle:        102400 kbytes in   2.167648 sec =    47240 kbytes/sec

        inside:        102400 kbytes in   3.465312 sec =    29550 kbytes/sec

 

here is a vmware guest, also FreeBSD, on a Linux host (dual Xeon 2.66GHz) with a SATA RAID0 partition:

 

regor# diskinfo -cvt /dev/da0

/dev/da0

        512             # sectorsize

        9663676416      # mediasize in bytes (9.0G)

        18874368        # mediasize in sectors

        1174            # Cylinders according to firmware.

        255             # Heads according to firmware.

        63              # Sectors according to firmware.

 

I/O command overhead:

        time to read 10MB block      0.208058 sec       =    0.010 msec/sector

        time to read 20480 sectors   4.980273 sec       =    0.243 msec/sector

        calculated command overhead                     =    0.233 msec/sector

 

Seek times:

        Full stroke:      250 iter in   4.704609 sec =   18.818 msec

        Half stroke:      250 iter in   4.178791 sec =   16.715 msec

        Quarter stroke:   500 iter in   5.587117 sec =   11.174 msec

        Short forward:    400 iter in   0.596792 sec =    1.492 msec

        Short backward:   400 iter in   1.815755 sec =    4.539 msec

        Seq outer:       2048 iter in   0.556478 sec =    0.272 msec

        Seq inner:       2048 iter in   0.587404 sec =    0.287 msec

Transfer rates:

        outside:       102400 kbytes in   1.649082 sec =    62095 kbytes/sec

        middle:        102400 kbytes in   1.691255 sec =    60547 kbytes/sec

        inside:        102400 kbytes in   1.899930 sec =    53897 kbytes/sec

 

now i realize the Xeon box is just plain faster.  we're talking Xeons vs a P4, and SATA RAID0 vs a single IDE disk.  but i never... NEVER would have guessed that a guest machine could still outperform the real thing.
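for anyone who wants to check my math: here's a quick back-of-the-envelope script (just plugging in the transfer rates quoted from the two diskinfo runs above) showing the guest ahead on sustained throughput in every zone, even though its per-command overhead is actually higher:

```python
# Transfer rates quoted in the two diskinfo runs above (kbytes/sec).
real_box = {"outside": 57165, "middle": 47240, "inside": 29550}
vm_guest = {"outside": 62095, "middle": 60547, "inside": 53897}

for zone in real_box:
    ratio = vm_guest[zone] / real_box[zone]
    print(f"{zone:>8}: guest is {ratio:.2f}x the real disk")

# Note the command overhead goes the other way: 0.233 msec/sector
# in the guest vs 0.086 on the real box -- the virtualization layer
# adds per-command latency, but the RAID0 stripe wins on bandwidth.
```

the gap is biggest on the inner tracks (roughly 1.8x), where the single IDE disk slows down the most.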

 

is there a bowdown smilie?

 

i had the WORST i/o experience with MS VS2005, and i punished myself with that product for over a year.  i WISH i had moved to vmware-server while it was still in beta.  kudos to the vmware team for a truly superior product!!

