So I've created a db that does the following:
1. Save some zip files from an FTP site to a network folder, extract the text files within, then import the text files into my db. One of the files is 500k rows, which bloats my db from its 580 MB starting size to around 1 GB (can't recall exactly).
2. Run the remaining queries (all appending and deleting records, no new tables created) in order to minimize bloat.
3. I need to do some comparisons against the roughly 400k rows in the previous month's db, so early in the process I copy that month's table into my current db and name it "prior". I do this instead of creating a linked table because the location of the prior month's db is subject to change. The db bloats to about 1.7 GB.
4. Delete all of this temp data from my db, then manually compact & repair at the end, which brings the size back down to 580 MB.
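For step 3, one way to handle the moving location without a linked table is to keep the path in a variable (or a small settings table) and pull the data with TransferDatabase. A minimal sketch, assuming hypothetical names — "MonthlyData" for the source table and "Prior" for the local copy are placeholders:

```vba
' Sketch only: import the prior month's table from whatever path
' this month's db happens to live at. priorDbPath could come from
' a settings table, an InputBox, or a FileDialog picker.
Sub PullPriorMonth(ByVal priorDbPath As String)
    ' Drop last month's local copy if it's still hanging around
    On Error Resume Next
    DoCmd.DeleteObject acTable, "Prior"
    On Error GoTo 0

    ' Copy the table (structure + data) into the current db as "Prior"
    DoCmd.TransferDatabase acImport, "Microsoft Access", priorDbPath, _
        acTable, "MonthlyData", "Prior"
End Sub
```

The import itself will still bloat the front-end, of course; combining this with the temp-db approach below would keep that copy out of the main file entirely.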
The process works, but 1.7 GB is uncomfortably close to the 2 GB limit for my liking. I'm thinking there is a way to accomplish #1 (and maybe even #3) with a temp db, but I don't have any experience with these. I've found an example db that shows how to create an external temp db, but I'm not 100% sure how it works. Does anyone have additional resources I can study?
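The basic shape of the external temp-db technique is: create a throwaway .accdb with DAO, land the big import there via the text ISAM, link the table back into the front-end, and delete the scratch file when done, so the bloat never touches the main db. A minimal sketch, untested, with placeholder paths, table name, and text-file name:

```vba
' Sketch of the external temp-db idea (all names are placeholders).
' Requires a reference to Microsoft Office Access database engine (DAO).
Sub ImportViaTempDb()
    Dim tmpPath As String
    Dim tmpDb As DAO.Database

    tmpPath = Environ("TEMP") & "\scratch.accdb"
    If Len(Dir(tmpPath)) > 0 Then Kill tmpPath

    ' 1. Create the scratch database
    Set tmpDb = DBEngine.CreateDatabase(tmpPath, dbLangGeneral)

    ' 2. Import the 500k-row text file into the scratch db, not the
    '    front-end, using the Text ISAM (note '#' replaces '.' in the
    '    file name inside the brackets)
    tmpDb.Execute _
        "SELECT * INTO BigTable FROM " & _
        "[Text;FMT=Delimited;HDR=Yes;DATABASE=\\server\share\].[bigfile#txt]", _
        dbFailOnError
    tmpDb.Close
    Set tmpDb = Nothing

    ' 3. Link the table back so queries in the front-end can use it;
    '    the link costs almost nothing in size
    DoCmd.TransferDatabase acLink, "Microsoft Access", tmpPath, _
        acTable, "BigTable", "BigTable"
End Sub

Sub CleanUpTempDb()
    ' Remove the link (not the data) and delete the scratch file;
    ' no compact & repair needed for the temp data
    On Error Resume Next
    DoCmd.DeleteObject acTable, "BigTable"
    Kill Environ("TEMP") & "\scratch.accdb"
End Sub
```

The same pattern would cover #3: land the "prior" copy in the scratch db and link it, and the front-end should stay near its 580 MB baseline instead of climbing toward 1.7 GB.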