I am not sure what I am doing wrong here. Can you please help me get awk to read the line numbers from LineNumbers? I thought awk would be a faster option. Your code does not work because it re-reads the whole file for each line number read from LineNumbers.
Additionally, parsing a whole file just to change a single line, and doing that many times over, is extremely inefficient, and doubly so when the modification to each line is identical. FNR and NR are two special variables in awk: FNR holds the current record number (line number, by default) within the current file, and NR holds the overall number of records (lines) read so far across all input files.
While reading the first input file, these two values are equal; when they are, we store the line number as a key in the associative array lineno, and then skip to the next input line. For every later file, we test whether the current FNR is a key in lineno. If so, it is changed to.
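The two-pass idea above can be sketched as follows. The question truncates the actual replacement, so replacing the first character of each listed line with `#` is an assumed example edit:

```shell
# Pass over LineNumbers first (NR == FNR only holds there) and remember the
# wanted line numbers; then, while reading file.txt, FNR restarts from 1, so
# "FNR in lineno" fires exactly on the wanted lines.
printf '2\n' > LineNumbers                # sample data for illustration
printf 'aaa\nbbb\nccc\n' > file.txt
awk 'NR == FNR { lineno[$1]; next }       # first file: collect line numbers
     FNR in lineno { $0 = "#" substr($0, 2) }   # assumed example edit
     { print }' LineNumbers file.txt
```

Both files are read exactly once, regardless of how many line numbers are listed.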
A totally different approach is to use sed to create a sed script that does the necessary changes. Don't do this, as it's a bad approach in multiple ways, but FYI, if you were to use a shell loop like the one in your question, then you'd write it as:. Note that you don't need to escape regexp metacharacters in strings, just in regexps.
All you need to write there is ". See "Why is using a shell loop to process text considered bad practice?" There is also the possibility of reading the line numbers from the file directly into a variable in awk via getline (assuming the line numbers are sorted):
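The getline variant might look like this. Again, replacing the first character with `#` is an assumed example edit, and this only works if LineNumbers is sorted in ascending order:

```shell
# Pull the next wanted line number from LineNumbers on demand with getline;
# whenever the main input reaches that line, edit it and fetch the next number.
printf '2\n' > LineNumbers                 # sample data for illustration
printf 'aaa\nbbb\nccc\n' > file.txt
awk -v numfile=LineNumbers '
    BEGIN { getline n < numfile }          # first wanted line number
    NR == n { $0 = "#" substr($0, 2)       # assumed example edit...
              getline n < numfile }        # ...then fetch the next number
    { print }' file.txt
```

This avoids holding all line numbers in memory at once, at the cost of the sortedness requirement.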
Read file line by line with awk to replace characters in certain line numbers. Asked 1 year, 8 months ago. Active 1 year, 8 months ago. Viewed 2k times. Many thanks! P. HamB
All we have to do is create an expression like that for each line number n: sed 's. Sounds great. At the moment it would not work for me because of the exact line match. I like the toothpick syndrome :D.
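The "sed writing a sed script" idea can be sketched like this. The expression in the answer is truncated, so the edit shown (replace the first character with `#`) is an assumption, and `|` is used as the s-command delimiter to avoid the toothpick syndrome; `-f -` (read the script from stdin) is a GNU sed feature:

```shell
# Turn each line number n in LineNumbers into the sed expression "ns|^.|#|",
# then feed the whole generated script to a second sed in one pass.
printf '2\n' > LineNumbers                # sample data for illustration
printf 'aaa\nbbb\nccc\n' > file.txt
sed 's/.*/&s|^.|#|/' LineNumbers | sed -f - file.txt
```

The target file is still read only once, however many line numbers are listed.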
Always ask about exactly what you need. Update the question if you have further requirements; I won't update my answer, as it would otherwise not be in line with the question. Nice, that's great.

Viewed 7k times. I have a script that reads log files and parses the data to insert it into a MySQL table.
I don't think awk will give you any significant improvement in time; use another language. I have replaced bash scripts with Perl a couple of times for long tasks, and the difference was enormous. Shell is slow. Sundeep: please note that I used "significant" in my comment.
For larger files, Perl is suggested. Also, the link you pointed to doesn't actually make a comparison between tools; it just discusses the ups and downs of a practice.

To answer your question, I assume the following rules of the game: each line contains various variables, and each variable can be found by a different delimiter.
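Under those assumed rules, extracting differently delimited variables from one line might look like this sketch; the field names (`ip=`, `user:`) and delimiters are illustrative assumptions, not the actual log format from the question:

```shell
# Use match() to locate each variable by its own delimiter, then cut the
# value out of the line with substr() using RSTART/RLENGTH.
printf 'ip=1.2.3.4;user:alice time[12:00]\n' > sample.log   # assumed format
awk '{
    if (match($0, /ip=[^;]*/))   ip   = substr($0, RSTART + 3, RLENGTH - 3)
    if (match($0, /user:[^ ]*/)) user = substr($0, RSTART + 5, RLENGTH - 5)
    print ip, user
}' sample.log
```

Each variable gets its own pattern, so a single awk pass handles all the delimiters at once.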
OK, I found my answer on Stack Overflow. Glad to see you found a solution. If you use getline and you have a lot of identical CIP values, it might be useful to buffer the results to speed up the program. This would buffer the result, i.e. if the value already exists, don't execute geoiplookup anymore, but just pick the result from buffer[CIP]. — kvantour
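That caching pattern can be sketched like this. Here `echo country-for` stands in for the real `geoiplookup` call, and "the IP is in field 1" is an assumed log format:

```shell
# Cache the lookup result per IP: the external command is forked only the
# first time an IP is seen; later occurrences reuse buffer[ip].
printf '1.2.3.4 x\n1.2.3.4 y\n5.6.7.8 z\n' > access.log   # sample log
awk '{
    ip = $1
    if (!(ip in buffer)) {                 # first time we see this IP
        cmd = "echo country-for " ip       # real script: "geoiplookup " ip
        cmd | getline buffer[ip]           # capture the command output
        close(cmd)                         # allow re-running cmd later
    }
    print ip, buffer[ip]
}' access.log
```

With many repeated client IPs, this turns one fork per log line into one fork per distinct IP, which is where the speedup comes from.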