I need to extract the duplicate rows from a file, write these bad records to another file, and report a count of the bad records.
I have a command,
but it doesn't solve my problem.
Input:
A
A
A
B
B
C
Desired Output:
A
A
B
Count of bad records=3
But when I run my script, I get this output:
A
B
Count of bad records=2
which is not correct.
As always, any help is appreciated.
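The original script isn't visible in the thread, but a minimal awk sketch that produces the desired output — printing every repeat after the first occurrence, then the total — would be:

```sh
# For each input line, seen[$0]++ is the number of times the line was
# already seen: zero (false) on the first occurrence, nonzero (true) on
# every duplicate, so only duplicates are printed and counted.
printf 'A\nA\nA\nB\nB\nC\n' |
awk 'seen[$0]++ { print; n++ }
     END        { print "Count of bad records=" n }'
```

On the sample input this prints A, A, B and `Count of bad records=3`, matching the desired output. To also write the bad records to a separate file, change `print` to `print > "bad.txt"` (a hypothetical filename).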
I don't see the need for the END clause for this problem. Doesn't:
produce the same output?
When reading records, if the record has been seen more than one time, print it then.
But, looking at it again, this is the same as the script you initially provided that you said was not working.
If what you want is the input lines that are not duplicated that would be:
which produces the output:
which is not what was originally requested.
If there is only one word on each input line, and you want to print lines that are duplicates of previous lines (ignoring leading whitespace), try:
which produces the output:
but this still isn't the output originally requested. Please explain in more detail what it is that you want AND give us sample input and output that match your description.
Last edited by Don Cragun; 03-08-2013 at 03:36 PM..
Reason: Noticed that output doesn't match original request...
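The code blocks from this reply did not survive extraction. For the "input lines that are not duplicated" variant it mentions, a two-pass awk sketch (a reconstruction under that reading, not necessarily the original code) is:

```sh
# Sample input taken from the question above.
printf 'A\nA\nA\nB\nB\nC\n' > input.txt

# Pass 1 (NR==FNR) counts each line; pass 2 prints lines occurring exactly once.
awk 'NR==FNR { count[$0]++; next } count[$0] == 1' input.txt input.txt
```

On the sample input this prints only `C`, consistent with the remark that it is not what was originally requested.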
I have a file with 48 rows. I am counting 6 rows and adding 6 to that number and repeating the operation, and then output the value in column 1. For the second column, I would like to get sort of a binary output (1s and 2s) every 3rd row. This is what I have:
awk '{print ++src +... (1 Reply)
Hello
I have a file like this:
> cat examplefile
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss
>
I can sort the file by... (5 Replies)
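The rest of this question is truncated, but sorting a pipe-delimited file on its second field is a one-liner; this sketch assumes that is the field of interest:

```sh
# Recreate the sample file from the question.
cat > examplefile <<'EOF'
ghi|NN603762|eee
mno|NN607265|ttt
pqr|NN613879|yyy
stu|NN615002|uuu
jkl|NN607265|rrr
vwx|NN615002|iii
yzA|NN618555|ooo
def|NN190486|www
BCD|NN628717|ppp
abc|NN190486|qqq
EFG|NN628717|aaa
HIJ|NN628717|sss
EOF

# -t'|' sets the field separator; -k2,2 sorts on field 2 only.
sort -t'|' -k2,2 examplefile
```

Rows sharing the same second field end up grouped together, which is usually the first step before per-group processing with awk.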
Hi,
I need a Solaris shell script to read multiple files and count the number of unique name rows (strings) across those files. The input and output should be like this:
Input:
file 1
abc
cde
abc ... (9 Replies)
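The sample data is cut off, but counting unique rows across several files is a single awk pass; the file names and contents below are placeholders, not the poster's real data:

```sh
# Placeholder input files (the question's data is truncated).
printf 'abc\ncde\nabc\n' > file1
printf 'cde\nxyz\n' > file2

# Count each distinct line only once across all input files.
awk '!seen[$0]++ { n++ } END { print "unique rows:", n }' file1 file2
```

`sort -u file1 file2 | wc -l` gives the same number, at the cost of sorting the combined input.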
Hi! I have a file as below:
line1
line2
line2
line3
line3
line3
line4
line4
line4
line4
I would like to extract the lines that occur exactly twice (not unique, triplicate, or quadruplicate lines). The output will be as below:
line2
line2
I would appreciate if anyone can help. Thanks. (4 Replies)
Could anybody help with this?
I have input below .....
david,39
david,39
emelie,40
clarissa,22
bob,42
bob,42
tim,32
bob,39
david,38
emelie,47
What I want to do is count, for each name, how many different ages it has, so the output would be like this:
david,2
emelie,2
clarissa,1... (3 Replies)
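One way to count the distinct ages per name is to deduplicate name/age pairs first, then tally per name; awk's `for (name in n)` order is unspecified, so the output is piped through sort here:

```sh
printf 'david,39\ndavid,39\nemelie,40\nclarissa,22\nbob,42\nbob,42\ntim,32\nbob,39\ndavid,38\nemelie,47\n' |
awk -F, '!seen[$1 FS $2]++ { n[$1]++ }
         END { for (name in n) print name "," n[name] }' |
sort
```

Repeated identical pairs (e.g. `david,39` twice) count only once, so `david` gets 2 (ages 39 and 38), as requested.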
Hi experts, I have a very large file and I need to add two columns: the first numbering each occurrence of a record, and the other giving the record's total count.
The input file:
21 2341 A
21 2341 A
21 2341 A
21 2341 C
21 2341 C
21 2341 C
21 2341 C
21 4567 A
21 4567 A
21 4567 C
... (6 Replies)
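The desired output is truncated, but on the usual reading (append a running occurrence number plus the group total to each row) a two-pass awk sketch is:

```sh
# Recreate the sample input from the question.
cat > big.txt <<'EOF'
21 2341 A
21 2341 A
21 2341 A
21 2341 C
21 2341 C
21 2341 C
21 2341 C
21 4567 A
21 4567 A
21 4567 C
EOF

# Pass 1: total occurrences of each record.
# Pass 2: append the running occurrence number and the total.
awk 'NR==FNR { tot[$0]++; next } { print $0, ++cnt[$0], tot[$0] }' big.txt big.txt
```

The first row becomes `21 2341 A 1 3` and the last `21 4567 C 1 1`. For a file too large to read twice, the totals pass can be replaced by a sort-based approach.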
Hi All,
I have the following input, which I want to process using awk:
Rows,NC,amount
1,1202,0.192387
2,1201,0.111111
3,1201,0.123456
i want the following output
count of rows = 3 ,sum of amount = 0.426954
Many thanks (2 Replies)
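Skipping the header row and summing the third comma-separated field gives exactly the requested line:

```sh
# NR > 1 skips the "Rows,NC,amount" header; $3 is the amount column.
printf 'Rows,NC,amount\n1,1202,0.192387\n2,1201,0.111111\n3,1201,0.123456\n' |
awk -F, 'NR > 1 { n++; sum += $3 }
         END    { printf "count of rows = %d ,sum of amount = %.6f\n", n, sum }'
```

This prints `count of rows = 3 ,sum of amount = 0.426954`.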
I have searched the internet for duplicate row extracting.
All I have seen is extracting good rows or eliminating duplicate rows.
How do I extract duplicate rows from a flat file in unix.
I'm using Korn shell on HP Unix.
For example:
FlatFile.txt
========
123:456:678
123:456:678
123:456:876... (5 Replies)
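On a platform where only POSIX tools are guaranteed (Korn shell on HP-UX), `sort | uniq -d` prints one copy of each duplicated row:

```sh
# Recreate the sample file (truncated in the question).
printf '123:456:678\n123:456:678\n123:456:876\n' > FlatFile.txt

# uniq -d prints each line that appears more than once, exactly once.
sort FlatFile.txt | uniq -d
```

Add `-c` before `-d` to see the counts as well; to print every duplicate occurrence rather than one copy, `awk 'seen[$0]++'` on the sorted or unsorted file works too.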
I have an input file with this formatting:
6000000901 ;36200103 ;h3a01f496 ;
2000123605 ;36218982 ;heefa1328 ;
2000273132 ;36246985 ;h08c5cb71 ;
2000041207 ;36246985 ;heef75497 ;
Each field is separated by a semicolon. Sometimes, the second field is... (6 Replies)