@@ -0,0 +1,40 @@
Access 97 tested through ODBC 1998.04.19, by monty@mysql.com
Access 97 has a bug when one executes a SELECT followed very quickly by a
DROP TABLE or a DROP INDEX command:
[Microsoft][ODBC Microsoft Access 97 Driver] The database engine couldn't lock table 'crash_q' because it's already in use by another person or process. (SQL-S1000)(DBD: st_execute/SQLExecute err=-1)
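A minimal DBI sketch of the sequence that triggers it (the DSN name is a
placeholder; 'crash_q' is the table from the error above):

  use DBI;
  my $dbh = DBI->connect("dbi:ODBC:access_test", "", "") or die $DBI::errstr;
  $dbh->do("CREATE TABLE crash_q (a INTEGER)");
  my $sth = $dbh->prepare("SELECT a FROM crash_q");
  $sth->execute();
  $sth->finish();
  # Dropping the table right after the SELECT is what provokes the
  # 'already in use by another person or process' error above.
  $dbh->do("DROP TABLE crash_q") or print "error: $DBI::errstr\n";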
Debugging SQL queries in Access 97 is terrible because most error messages
are of type:
Error: [Microsoft][ODBC Microsoft Access 97 Driver] Syntax error in CREATE TABLE statement. (SQL-37000)(DBD: st_prepare/SQLPrepare err=-1)
which doesn't tell you a thing!
--------------
Access 2000 tested through ODBC 2000.01.02, by monty@mysql.com
crash-me takes a LONG time to run under Access 2000.
The '1+NULL' and the 'OR and AND in WHERE' tests kill
ActiveState Perl, build 521, DBI-ODBC with an OUT OF MEMORY error.
The latter test also kills perl/Access with some internal errors.
To work around this, one must run crash-me repeatedly with the --restart option.
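For example, something like the following (the server name is an assumption;
use whatever entry in server-cfg matches your setup):

  perl crash-me --server=access --restart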
Testing of the 'constant string size' (< 500K) takes a LOT of memory
in Access (at least 250M on my computer).
Testing of the number of 'simple expressions' takes a REALLY long time
and a lot of memory; at some point I was up to 350M of used memory!
To fix this, I modified crash-me to use lower max limits in the
above tests.
Benchmarks (under Win98):
Running the connect-test will take up all available memory and this
will not be freed even after quitting perl! There is probably some
bug in the Access connect code that eats memory!


@@ -0,0 +1,36 @@
I did not spend much time tuning crash-me or the limits file. In short,
here's what I did:
- Put the engine into ANSI SQL mode by using the following odbc.ini:
[ODBC Data Sources]
test
[test]
ServerDB=test
ServerNode=
SQLMode=3
- Grabbed the db_Oracle package and copied it to db_Adabas
- Implemented a 'version' method (a rough sketch follows after this list).
- Ran crash-me with the --restart option; it failed when guessing the
query_size.
- Reran crash-me 3 or 4 times until it succeeded. At some point it
justified its name; I had to restart the Adabas server in the
table name length test ...
- Finally crash-me succeeded.
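For reference, a rough sketch of what such a 'version' method can look like,
following the db_Oracle style (this is a hypothetical sketch; the real method
may query the server instead of returning a static string):

  package db_Adabas;

  # Hypothetical sketch: the benchmark code asks each driver package
  # for a human-readable server version string.
  sub version
  {
    my ($self) = @_;
    # A fuller implementation would ask the server for its release;
    # a static string is enough for the benchmark output.
    return "Adabas 6.1.15.42";
  }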
That's it, folks. The benchmarks were run on my P90 machine with
32 MB RAM, running Red Hat Linux 5.0 (Kernel 2.0.33, glibc-2.0.7-6).
MySQL was version 3.21.30, Adabas was version 6.1.15.42 (the one from
the promotion CD of 1997). I was using X11 and Emacs while benchmarking.
An interesting note: the MySQL server had 4 processes, the three usual
ones and a process for serving me, each about 2 MB RAM, including a
shared memory segment of about 900K. Adabas had 10 processes running from
the start, each about 16-20 MB, including a shared segment of 1-5 MB. You
guess which one I prefer ... :-)
Jochen Wiedmann, joe@ispsoft.de


@@ -0,0 +1,102 @@
*****************************************************************
NOTE:
This is an old comment about what it was like to run crash-me on Empress
for the first time. I think it was on Empress 6.0
*****************************************************************
start testing empress ...
added a nice line for the max join ....
stripped the AS out of the FROM field ...
that's working on Empress ....
at this moment with ....
max constant string size in where .... taking a lot of memory ...
at this moment (it's still growing just waiting till it stops ..) 99mb ..
sorry it started growing again ...
max 170 mb ... then it gives an error ...
Yes it crashed .....
at max constant string size in where ... with IOT trap/Abort(core dumped) :-)
nice isn't it ... hope it saved the things ....
I commented out the signal handling because I could see how the script was running
and I wasn't sure if $SIG{PIPE} = 'DEFAULT' ... was working ...
restarting with limit 8333xxx ... couldn't see it any more ...
query is printed ...(200000 lines ..). mmm Nice IOT trap/Abort ...
and again ..and again ...
aha ... and now it's going further ...
max constant string size in select: ...
taking 100 mb
crashing over and over again ....
max simple expressions ...
is taking ... 82 mb ...
mmmm this is taking very very very long .... after 10 minutes I will kill it and run it again ... I think it can't process this query that fast ... and will crash anyway ...
still growing very slow to the 90 mb ...
killed it ... strange ... it doesn't react to ctrl-c ... but kill -15 does work
mmm still busy killing itself ... memory is growing to 128 mb ...
sorry .. 150 mb .. and then the output ..
maybe something extra to add to crash-me ...
if debug ....
if length $query > 300 ... just print $errstr .. else print $query + $errstr ..
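Roughly what is meant, as a hypothetical helper (the name is made up):

  # Hypothetical helper: keep the log readable when the generated query is huge.
  sub print_error
  {
    my ($query, $errstr) = @_;
    if (length($query) > 300)
    {
      print "error: $errstr\n";                  # query too long to be useful
    }
    else
    {
      print "query: $query\nerror: $errstr\n";   # short query: print both
    }
  }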
at this moment it is still busy printing ....
first clear all locks ... with empadm test lockclear ... else it will give me
the error with a lock ...
restarting at 4194297 .... mmm a bit high I think ...
after 5 minutes I will kill it ...
mmm have to kill it again ... took 30 mb ..now growing to 42 mb ..
restarting at 838859 ... hope this will crash normally ... :-)
I will give it again 5 minutes to complete ...
taking 12 mb .... will kill it ... after 4 minutes ....
restarting at 167771 ... taking 6 mb ... give it again 5 minutes ....
will kill it again ... else it becomes too late tonight ...
mmm started with 33xxxx and it crashes ...:-) yes ...
can't we build in a function which will restart itself again ...
mmmm this is really boring .. starting it over and over again ...
WHO .... NICE >>>>
Restarting this with high limit: 4097
.................
*** Program Bug *** setexpr: unknown EXPR = 1254 (4e6)
isn't it ... starting it again ...
finally finished with 4092 ....
now max big expression .....
directly taking .. 85 mb ... give it again 5 minutes ...
mmm I am going to kill it again ... mmm it grows to 146 mb ...
restarting with 1026 ... taking 25 mb ..
won't give it that long ... because it will crash anyway (just a guess) ..
killed it ...
restarting at 205 ... hope this will work ....
won't think so ... give it 2 minutes ... taking 12 mb ...
killed it ...restarting at ... 40 ... yes it crashes ...
7 is crashing ... 1 .... is good .. finally ... a long way ...
now max stacked expressions ....
taking 80 mb ... mmmm what sort of test is this ...it looks more like a harddisk test .. but it crashes .. nice ...
mmm a YACC overflow ... that's a nice error ...
but it goes on ... yep it didn't crash, just an error ...
mmm
my patch for the join didn't work ... let's take a look at what goes wrong ...
saw it ... forgot some little thing .. mm no .. then ... another little typo
mmm again a really nice bug ...
Restarting this with high limit: 131
...
*** Program Bug *** xflkadd: too many read locks
then the forgotten lock ....
mmmm bigger problem ...
with empadm test lockinfo ... gives ...
*** System Problem *** no more clients can be registered in coordinator
*** User Error *** '/usr/local/empress/rdbms/bin/test' is not a valid database
that's really really nice ....
hmmm after coordclear ... it's fine again ...
strange ...
after restarting the script again ... it is going further ....
the overflow trick is nice and working well ...
now I get table 'crash_q' does not exist for everything ...
normal ...???? mmm it all went well in the end .. so I think it's normal ...
mmmm a lot of table 'crash_q' does not exist ... again ...
sometimes when the overflow is there ... I restart it and it says ...
restarting at xxxx that's not good ... but hey ... what the heck ...
maybe that's good because if one test runs more than 200 times ....
it won't exceed that test ...
....
yes finally the end of crash-me ...
at last ... crash-me safe: yes ...
yep, don't think so, eh ....


@@ -0,0 +1,59 @@
# This file describes how to run benchmarks and crash-me with FrontBase
Installed components:
- FrontBase-2.1-8.rpm
(had to run with rpm -i --nodeps; the rpm wanted libreadline.so.4.0,
but only libreadline.so.4.1 was available)
- DBD-FB-0.03.tar.gz
(perl Makefile.PL;
make;
make test;
make install;)
- DBI-1.14.tar.gz
(perl Makefile.PL;
make;
make test;
make install;)
- Msql-Mysql-modules-1.2215.tar.gz
(perl Makefile.PL;
make;
make test;
make install;)
After installations:
- cd /etc/rc.d
  FBWeb start
  FrontBase start
- cd /usr/local/mysql/sql-bench
- FBExec &
- FrontBase test
crash-me:
There was a lot of trouble running crash-me; FrontBase core
dumped several tens of times while crash-me was trying to determine
the maximum values in different areas.
The crash-me program itself also needed to be tuned quite a lot
for FB. There were also some bugs/missing features in the crash-me
program, which are now fixed in the new version.
After we finally got the limits, we ran the benchmarks.
benchmarks:
Problems again. FrontBase core dumped in every part of the
benchmark (8/8) tests. After a lot of fine-tuning we got the
benchmarks to run through. The maximum values had to be lowered
a lot in many of the tests.
The benchmarks were run with the following command:
perl run-all-tests --server=frontbase --host=prima \
  --cmp=frontbase,mysql --tcpip --log


@@ -0,0 +1,26 @@
*****************************************************************
NOTE:
I, Monty, pulled this comment out from the public mail I got from
Honza when he published the first crash-me run on Informix
*****************************************************************
Also attached are diffs from server-cfg and crash-me -- some of
them are actual bugs in the code, some add extensions for Informix,
some of the comment-outs were necessary to finish the test. Some of
the problematic pieces that are commented out sent Informix into a
veeeery long stretch at load 1 on the machine (max_conditions for example), so
they could be considered crashes, but I'd prefer that someone check the
code before drawing such a conclusion.
Some of the code that is commented out failed with some other SQL
error message which might mean a problem with the sequence of commands
in crash-me. Interestingly, some of the tests failed the
first time but went OK in the second or third run, so the results are
the results of several iterations (like a column doesn't exist in the first
try but the second pass goes OK).
I'd like to hear your comments on the bug fixes and Informix specific
code before we go into debugging the problems.
Yours,
Honza Pazdziora


@@ -0,0 +1,39 @@
# This file describes how to run MySQL benchmarks with MySQL
#
# The test was run on an Intel Xeon 2x 550 MHz machine with 1G memory and a
# 9G hard disk. The OS is SuSE 6.4, with Linux 2.2.14 compiled with SMP
# support.
# Both the Perl client and the database server are run
# on the same machine. No other CPU-intensive process was running during
# the benchmark.
#
#
# First, install MySQL from RPM or compile it according to the
# recommendations in the MySQL manual
#
# Start MySQL
bin/safe_mysqld -O key_buffer=16M &
#
# Now we run the test that can be found in the sql-bench directory in the
# MySQL 3.23 source distribution with and without --fast
#
# Note that if you want to produce results that are compared to some database,
# you should add "--cmp=databasename" as an extra option to the test
#
CMP="--cmp=pg"
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 1G, key_buffer=16M" $CMP
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 1G, key_buffer=16M" --fast $CMP
# If you want to store the results in an output/RUN-xxx file, you should
# repeat the benchmark with the extra options --log --use-old-result
# This will create the RUN file based on the previous results
#
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 1G, key_buffer=16M" --log --use-old-result $CMP
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 1G, key_buffer=16M" --fast --log --use-old-result $CMP


@@ -0,0 +1,107 @@
# This file describes how to run MySQL benchmark suite with PostgreSQL
#
# WARNING:
#
# Don't run the --fast test on a PostgreSQL 7.1.1 database on
# which you have any critical data; during one of our test runs
# PostgreSQL got a corrupted database and all data was destroyed!
# When we tried to restart postmaster, it died with a
# 'no such file or directory' error and never recovered from that!
#
# Another time vacuum() filled our system disk, which had 6G free,
# while vacuuming a table of 60 MB.
#
# WARNING
# The test was run on an Intel Xeon 2x 550 MHz machine with 1G memory and a
# 9G hard disk. The OS is SuSE 7.1, with Linux 2.4.2 compiled with SMP
# support.
# Both the Perl client and the database server are run
# on the same machine. No other CPU-intensive process was running during
# the benchmark.
#
# During the test we run PostgreSQL with -o -F, not async mode (not ACID safe)
# because when we started postmaster without -o -F, PostgreSQL log files
# filled up a 9G disk until postmaster crashed.
# We did however notice that with -o -F, PostgreSQL was a magnitude slower
# than when not using -o -F.
#
# First, install postgresql-7.1.2.tar.gz
# Add the following lines to your ~/.bash_profile or
# corresponding file. If you are using csh, use 'setenv'.
export POSTGRES_INCLUDE=/usr/local/pg/include
export POSTGRES_LIB=/usr/local/pg/lib
PATH=$PATH:/usr/local/pg/bin
MANPATH=$MANPATH:/usr/local/pg/man
#
# Add the following line to /etc/ld.so.conf:
#
/usr/local/pg/lib
# and run:
ldconfig
# untar the postgres source distribution, cd to postgresql-*
# and run the following commands:
CFLAGS=-O3 ./configure
gmake
gmake install
mkdir /usr/local/pg/data
chown postgres /usr/local/pg/data
su - postgres
/usr/local/pg/bin/initdb -D /usr/local/pg/data
/usr/local/pg/bin/postmaster -o -F -D /usr/local/pg/data &
/usr/local/pg/bin/createdb test
exit
#
# Second, install packages DBD-Pg-1.00.tar.gz and DBI-1.18.tar.gz,
# available from http://www.perl.com/CPAN/
export POSTGRES_LIB=/usr/local/pg/lib/
export POSTGRES_INCLUDE=/usr/local/pg/include/postgresql
perl Makefile.PL
make
make install
#
# Now we run the test that can be found in the sql-bench directory in the
# MySQL 3.23 source distribution.
#
# We did run two tests:
# The standard test
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 512M, pg started with -o -F" --user=postgres --server=pg --cmp=mysql
# When running with --fast we run the following vacuum commands on
# the database between each major update of the tables:
# vacuum analyze table
# vacuum table
# or
# vacuum analyze
# vacuum
# The time for vacuum() is accounted for in the book-keeping() column, not
# in the test that updates the database.
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 512M, pg started with -o -F" --user=postgres --server=pg --cmp=mysql --fast
# If you want to store the results in an output/RUN-xxx file, you should
# repeat the benchmark with the extra options --log --use-old-result
# This will create the RUN file based on the previous results
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 512M, pg started with -o -F" --user=postgres --server=pg --cmp=mysql --log --use-old-result
run-all-tests --comment="Intel Xeon, 2x550 Mhz, 512MG, pg started with -o -F" --user=postgres --server=pg --cmp=mysql --fast --log --use-old-result
# Between running the different tests we dropped and recreated the PostgreSQL
# database to ensure that PostgreSQL got a clean start,
# independent of the previous runs.


@@ -0,0 +1,30 @@
*****************************************************************
NOTE:
This is an old comment about what it was like to run crash-me on PostgreSQL
for the first time. I think it was on pg 6.2
*****************************************************************
mmm the memory use of postgres is very, very high ...
at this moment I am testing it ...
and the 'tables in join' test is taking 200MB of memory ...
I am happy to have 400mb of swap ... so it can have it ...
but other programs will give some errors ...
just a second ago ... vim core dumped .. XFree crashed completely ... down to the prompt
the menu bar of Red Hat disappeared ....
at this moment the max is 215 mb of memory that postgres is taking ...
the problem with postgres is the following error:
PQexec() -- Request was sent to backend, but backend closed the channel before responding. This probably means the backend terminated abnormally before or while processing the request
I think we can solve this with a goto command ... to go back again ... after
connecting again ...
postgres is taking 377 mb .... mmm almost out of memory ... 53mb left ..
mmm it's growing ... 389 mb .. 393 mb ... 397 mb .. better wait for the out of memory ... I think 409, 412 max ...
PS: added some nice code for the channel closing ...
it must now redo the query when the error is the above error ...
hope this helps ...
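A rough sketch of that retry idea (hypothetical; the error-string match, the
DSN and the user are assumptions):

  use DBI;

  # Hypothetical sketch: redo the query after reconnecting when the
  # backend closes the channel.
  sub do_with_retry
  {
    my ($dsn, $query) = @_;
    for my $attempt (1 .. 3)
    {
      my $dbh = DBI->connect($dsn, "postgres", "") or next;
      return 1 if $dbh->do($query);
      # Give up on any error other than the one above.
      die "query failed: $DBI::errstr"
        unless defined($DBI::errstr)
               && $DBI::errstr =~ /backend closed the channel/;
    }
    return 0;   # still failing after three attempts
  }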
after crashing my X again ...
I stopped testing postgres