ben_sorensen
New Member
- Joined
- Jun 11, 2015
- Messages
- 44
So I have a file that I have been working with. I append two files together, no problem, 85,000 rows.
Then I go back into the query and merge a file in, essentially a vLookup on country codes into country names, no problem, still 85,000 rows.
Then I do another merge, another vLookup, essentially taking all the labels in one category and reassigning them a more generic label in another column for easier, less granular analysis, and suddenly I have 87,000 rows of data.
I am trying to understand where the disconnect was that created those extra rows of data. I know I can do a remove duplicates, but I want to understand where I am doing something wrong.
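For anyone following along, the usual cause of this is duplicate keys in the lookup table: a merge matches every row against every matching key, so each duplicate key multiplies the matched rows. A minimal sketch in plain Python (hypothetical data, not the actual file) shows the effect:

```python
# Three data rows keyed by country code.
data = [("US", 100), ("FR", 200), ("US", 300)]

# Lookup table with a duplicate key "US" (hypothetical example data).
lookup = [("US", "United States"), ("FR", "France"), ("US", "USA")]

# A merge/vLookup-style join: one output row per (data row, matching lookup row).
merged = [(code, value, name)
          for code, value in data
          for key, name in lookup
          if key == code]

# Each "US" data row matched two lookup rows, so 3 input rows became 5.
print(len(merged))
```

Checking the second lookup table for duplicate keys (e.g. the same label mapped to two generic labels) would likely explain the jump from 85,000 to 87,000 rows.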
Thanks for your help.
Best
Ben