Okay, so you all have been amazing! I have learned a great deal I never knew and refreshed some old skills I forgot I had. I have now compiled my data to the point where it needs further review by another department. After sorting, consolidating, and XLOOKUPs, I have a table where column A (ID1) is a unique ID and column B (ID2) is supposed to be unique to a different system. Not all of the data presented was accurate, though, so I have duplicates in column B. That is okay, but those rows will have to be manually reviewed by a different department. The logic I need is to look at each row: if the value in column B is not duplicated, move on to the next row. For all rows that share the same ID in column B, I need a new table (or a way to filter out the singles) so I can select that data for review. I hope this makes sense. Sample data is below, followed by a sketch of the kind of formula I have in mind.
ID1 | ID2 |
120440 | 12528 |
120830 | 12531 |
120385 | 12543 |
120387 | 12849 |
110495 | 2248 |
150005 | 12979 |
150006 | 2579 |
150007 | 2578 |
110506 | 2578 |
120059 | 12812 |
120055 | 12812 |
120056 | 12812 |
120057 | 12812 |
120058 | 12812 |
659073 | 12556 |
120790 | 12556 |
120527 | 12553 |
120792 | 12556 |
120891 | 12554 |
120880 | 12563 |
650347 | 2100 |
120467 | 12766 |
110006 | 9420 |
110017 | 9408 |
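In case it helps clarify what I'm after: one way I imagine this could work (assuming Excel 365/2021 with dynamic arrays, and assuming the data above sits in A1:B25 with headers in row 1) is a FILTER that keeps only the rows whose ID2 appears more than once:

=FILTER(A2:B25, COUNTIF(B2:B25, B2:B25) > 1, "no duplicates")

Here COUNTIF(B2:B25, B2:B25) counts each ID2 against the whole column, so FILTER keeps a row only when that count is greater than 1. If there is a better approach (helper column, Power Query, etc.), I'm open to it.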