STAG-FLATTEN(1p) User Contributed Perl Documentation STAG-FLATTEN(1p)
NAME
stag-flatten - turns stag data into a flat table
SYNOPSIS
stag-flatten -c name -c person/name dept MyFile.xml
DESCRIPTION
reads in a file in a stag format, and 'flattens' it to a tab-delimited table format. Given this data:
(company
(dept
(name "special-operations")
(person
(name "james-bond"))
(person
(name "fred"))))
the above command will return a two-column table:
special-operations james-bond
special-operations fred
If there are multiple values for the columns within the node, then the cartesian product of those values will be calculated.
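The cartesian-product behaviour can be illustrated with a short sketch (Python here, purely illustrative; stag-flatten itself is a Perl tool built on Data::Stag, and the dict below is only a stand-in for a parsed node):

```python
from itertools import product

# Stand-in for the parsed dept node from the sxpr example above:
# each column spec maps to the list of values found under the node.
dept = {
    "name": ["special-operations"],
    "person/name": ["james-bond", "fred"],
}

columns = ["name", "person/name"]

# One output row per element of the cartesian product of the
# per-column value lists.
rows = [list(combo) for combo in product(*(dept[c] for c in columns))]

for row in rows:
    # prints one tab-delimited row per combination
    print("\t".join(row))
```

With one department name and two person names this yields the two rows shown above; a node with two values in each column would yield four rows.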
USAGE
stag-flatten [-p PARSER] [-c COLS] [-c COLS] NODE <file>
ARGUMENTS
-p|parser FORMAT
FORMAT is one of xml, sxpr or itext
xml assumed as default
-c|column COL1,COL2,COL3,..
the name of the columns/elements to write out
this can be specified either with multiple -c arguments, or with a single -c followed by a comma-separated (no spaces) list of column (terminal node) names
after a single -c
-n|nest
if set, then repeating values will be compressed into the same row; each cell in the table will be enclosed by {}, and will
contain a comma-delimited set of values
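The effect of -n|nest can be sketched the same way (again Python, illustrative only): instead of multiplying rows out, the repeating values for each column are collapsed into a single {}-wrapped, comma-delimited cell.

```python
# Stand-in for the parsed dept node from the example above.
dept = {
    "name": ["special-operations"],
    "person/name": ["james-bond", "fred"],
}
columns = ["name", "person/name"]

# One row per node; each cell holds all values for that column,
# wrapped in {} and joined with commas.
row = ["{" + ",".join(dept[c]) + "}" for c in columns]
print("\t".join(row))
```

Here the two person names share one row, rather than producing two rows as in the default cartesian-product mode.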
SEE ALSO
Data::Stag
perl v5.10.0 2008-12-23 STAG-FLATTEN(1p)
STAG-GREP(1p) User Contributed Perl Documentation STAG-GREP(1p)
NAME
stag-grep - filters a stag file (xml, itext, sxpr) for nodes of interest
SYNOPSIS
stag-grep person -q name=fred file1.xml
stag-grep person 'sub {shift->get_name =~ /^A/}' file1.xml
stag-grep -p My::Foo -w sxpr record 'sub{..}' file2
USAGE
stag-grep [-p|parser PARSER] [-w|writer WRITER] NODE -q tag=val FILE
stag-grep [-p|parser PARSER] [-w|writer WRITER] NODE SUB FILE
stag-grep [-p|parser PARSER] [-w|writer WRITER] NODE -f PERLFILE FILE
DESCRIPTION
parses an input file using the specified parser (which may be a built-in stag parser, such as xml) and filters the resulting stag tree
according to a user-supplied subroutine, writing out only the nodes/elements that pass the test.
the parser is event based, so it should be able to handle large files (although if the node you parse is large, it will take up more
memory)
ARGUMENTS
-p|parser FORMAT
FORMAT is one of xml, sxpr or itext, or the name of a perl module
xml assumed as default
-w|writer FORMAT
FORMAT is one of xml, sxpr or itext, or the name of a perl module
-c|count
prints the number of nodes that pass the test
-f|filterfile PERLFILE
a file containing a perl subroutine (in place of the SUB argument)
-q|query TAG1=VAL1 -q|query TAG2=VAL2 ... -q|query TAGN=VALN
filters based on the field TAG
other operators can be used too - e.g. <, <=, etc.
multiple q arguments can be passed in
for more complex operations, pass in your own subroutine, see below
SUB
a perl subroutine. this subroutine is evaluated every time NODE is encountered - the stag object for NODE is passed into the subroutine.
if the subroutine returns true, the node will be passed to the writer for display
NODE
the name of the node/element we are filtering on
FILE
the file to be parsed. If no parser option is supplied, this is assumed to be in a stag compatible syntax (xml, sxpr or itext);
otherwise you should pass in a parser name or a parser module that throws stag events
SEE ALSO
Data::Stag
perl v5.10.0 2008-12-23 STAG-GREP(1p)