All,
Hopefully this is the correct forum for this question.
I am working with some older systems and trying to automate some data collection. So far, for background, my path is:
(1) there are 14 .txt files saved daily from our production floor (one file per machine) to a shared folder
(2) I have a macro that grabs one day's worth of .txt files and pulls them into an Excel sheet
(3) I have an index function that puts each day's data into a traditional tabular format
(4) the macro then imports the day's data into an Access table for storage and manipulation
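As a rough illustration of steps (2)-(3), the per-machine files can be flattened into uniform (date, machine, test, part, result) rows. This is only a sketch in Python, not the actual VBA macro, and the line format shown is an assumption based on the sample rows later in the post:

```python
import csv
import io

def parse_machine_file(machine: str, text: str):
    """Flatten one machine's daily file into (date, machine, test, part, result) rows.

    Assumes comma-separated lines of: date, test type, part number, result.
    The real file layout may differ - this is illustrative only.
    """
    rows = []
    for rec in csv.reader(io.StringIO(text)):
        if len(rec) != 4:
            continue  # skip blank or malformed lines
        date, test, part, result = (field.strip() for field in rec)
        rows.append((date, machine, test, part, float(result)))
    return rows

sample = (
    "9/18/2017, Load Test, Part A1, 6.821\n"
    "9/18/2017, Load Test, Part A2, 6.832\n"
)
records = parse_machine_file("Machine 21", sample)
print(records[0])  # → ('9/18/2017', 'Machine 21', 'Load Test', 'Part A1', 6.821)
```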
The data are pulled correctly into my Access table and formatted into six fields:
(1) ID (autonumber, primary key)
(2) Date field (short date format)
(3) Machine number (short text format)
(4) Test type (short text format)
(5) Part number (short text format)
(6) Result (number, double format)
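For reference, the six fields above map onto a simple table definition. Here is a minimal SQLite equivalent (column names are illustrative, not the actual Access field names; SQLite's INTEGER PRIMARY KEY AUTOINCREMENT stands in for Access's AutoNumber):

```python
import sqlite3

# Minimal SQLite sketch of the Access table described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TestResults (
        ID       INTEGER PRIMARY KEY AUTOINCREMENT,  -- (1) AutoNumber key
        TestDate TEXT,                               -- (2) short date
        Machine  TEXT,                               -- (3) machine number
        TestType TEXT,                               -- (4) test type
        PartNo   TEXT,                               -- (5) part number
        Result   REAL                                -- (6) number, double
    )
""")
conn.execute(
    "INSERT INTO TestResults (TestDate, Machine, TestType, PartNo, Result) "
    "VALUES (?, ?, ?, ?, ?)",
    ("9/18/2017", "Machine 21", "Load Test", "Part A1", 6.821),
)
count = conn.execute("SELECT COUNT(*) FROM TestResults").fetchone()[0]
print(count)  # → 1
```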
Each day, for each machine, there are multiple parts tested and different test results for each, e.g.:
9/18/2017, Machine 21, Load Test, Part A1, 6.821
9/18/2017, Machine 21, Load Test, Part A2, 6.832
etc and then
9/18/2017, Machine 21, Lateral Test, Part A1, 32.23
9/18/2017, Machine 21, Lateral Test, Part A2, 32.34
etc
The issue is that because moving the data from .txt through Excel to Access is pretty manual, I could have duplicate entries in my Access table.
So, I created a 'Find Duplicates Query' matching on Date, Machine, Test, and Part - this query finds the duplicates well. I then modified it into a delete query, and it does its job (lol).
However, the delete query removes every copy of the duplicated rows. I need to keep one row from each duplicate set (first, last, or other - doesn't matter), but I am not sure how to do that.
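One common pattern for keeping exactly one row per group is to keep the row with the smallest ID in each (Date, Machine, Test, Part) group and delete the rest, i.e. `DELETE ... WHERE ID NOT IN (SELECT Min(ID) ... GROUP BY ...)`. The same SQL idea should work as an Access delete query; here it is demonstrated in SQLite as a stand-in (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TestResults (
        ID INTEGER PRIMARY KEY AUTOINCREMENT,
        TestDate TEXT, Machine TEXT, TestType TEXT, PartNo TEXT, Result REAL
    )
""")
rows = [
    ("9/18/2017", "Machine 21", "Load Test", "Part A1", 6.821),
    ("9/18/2017", "Machine 21", "Load Test", "Part A1", 6.821),  # accidental re-import
    ("9/18/2017", "Machine 21", "Load Test", "Part A2", 6.832),
]
conn.executemany(
    "INSERT INTO TestResults (TestDate, Machine, TestType, PartNo, Result) "
    "VALUES (?, ?, ?, ?, ?)",
    rows,
)

# Keep the first (lowest-ID) row of each (date, machine, test, part) group
# and delete every other copy.
conn.execute("""
    DELETE FROM TestResults
    WHERE ID NOT IN (
        SELECT MIN(ID) FROM TestResults
        GROUP BY TestDate, Machine, TestType, PartNo
    )
""")
remaining = conn.execute("SELECT COUNT(*) FROM TestResults").fetchone()[0]
print(remaining)  # → 2
```

Using `Max(ID)` instead of `Min(ID)` would keep the last copy rather than the first.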
Sorry for all the background, but I wasn't sure if there was a more elegant or better way to do this.
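One arguably more elegant route is to prevent duplicates at import time: a unique composite index on (Date, Machine, Test, Part) makes the table reject repeated rows, so re-importing the same day's files cannot create duplicates. Access supports multi-field unique indexes; the sketch below shows the idea in SQLite (`INSERT OR IGNORE` is SQLite-specific - in Access the duplicate rows would be rejected during the append):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TestResults (
        ID INTEGER PRIMARY KEY AUTOINCREMENT,
        TestDate TEXT, Machine TEXT, TestType TEXT, PartNo TEXT, Result REAL
    )
""")
# Unique index over the natural key blocks duplicates up front.
conn.execute("""
    CREATE UNIQUE INDEX ux_natural_key
    ON TestResults (TestDate, Machine, TestType, PartNo)
""")
ins = ("INSERT OR IGNORE INTO TestResults "
       "(TestDate, Machine, TestType, PartNo, Result) VALUES (?, ?, ?, ?, ?)")
row = ("9/18/2017", "Machine 21", "Load Test", "Part A1", 6.821)
conn.execute(ins, row)
conn.execute(ins, row)  # re-import of the same day: silently skipped
count = conn.execute("SELECT COUNT(*) FROM TestResults").fetchone()[0]
print(count)  # → 1
```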
Thanks in advance for your time in reviewing this!
Andy