sort

File sort utility, often used as a filter in a pipe. This command sorts a text stream or file forwards or backwards, or according to various keys or character positions. Using the -m option, it merges presorted input files. The info page lists its many capabilities and options. See Example 11-10, Example 11-11, and Example A-8.
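
A few illustrative invocations (the file names here are hypothetical):

sort names.txt              # Sort lines in ascending order (the default).
sort -r names.txt           # Sort in reverse (descending) order.
sort -k2 names.txt          # Sort on the second whitespace-separated field.
sort -m sorted-1 sorted-2   # Merge two files that are already sorted.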

tsort

Topological sort, reading in pairs of whitespace-separated strings and sorting according to input patterns. The original purpose of tsort was to sort a list of dependencies for an obsolete version of the ld linker in an "ancient" version of UNIX.

The results of a tsort will usually differ markedly from those of the standard sort command, above.
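
As a minimal illustration, each whitespace-separated pair in the input is an ordering constraint, the first string preceding the second. Here "a b" and "b c" yield the topological order a, b, c:

bash$ echo "a b b c" | tsort
a
b
c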

uniq

This filter removes duplicate lines from a sorted file. It is often seen in a pipe coupled with sort.

cat list-1 list-2 list-3 | sort | uniq > final.list
# Concatenates the list files,
# sorts them,
# removes duplicate lines,
# and finally writes the result to an output file.

The useful -c option prefixes each line of the input file with its number of occurrences.

bash$ cat testfile
This line occurs only once.
This line occurs twice.
This line occurs twice.
This line occurs three times.
This line occurs three times.
This line occurs three times.

bash$ uniq -c testfile
      1 This line occurs only once.
      2 This line occurs twice.
      3 This line occurs three times.

bash$ sort testfile | uniq -c | sort -nr
      3 This line occurs three times.
      2 This line occurs twice.
      1 This line occurs only once.

The sort INPUTFILE | uniq -c | sort -nr command string produces a frequency-of-occurrence listing for the file INPUTFILE (the -nr options to sort give a reverse numerical sort). This template finds use in the analysis of log files and dictionary lists, and wherever the lexical structure of a document needs to be examined.

Example 16-12. Word Frequency Analysis

#!/bin/bash
# wf.sh: Crude word frequency analysis on a text file.
# This is a more efficient version of the "wf2.sh" script.

# Check for input file on command-line.
ARGS=1
E_BADARGS=85
E_NOFILE=86

if [ $# -ne "$ARGS" ]  # Correct number of arguments passed to script?
then
  echo "Usage: `basename $0` filename"
  exit $E_BADARGS
fi

if [ ! -f "$1" ]       # Check if file exists.
then
  echo "File \"$1\" does not exist."
  exit $E_NOFILE
fi


########################################################
# main ()
sed -e 's/\.//g'  -e 's/\,//g' -e 's/ /\
/g' "$1" | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr
#                           =========================
#                            Frequency of occurrence

#  Filter out periods and commas, and
#+ change space between words to linefeed,
#+ then shift characters to lowercase, and
#+ finally prefix occurrence count and sort numerically.

#  Arun Giridhar suggests modifying the above to:
#  . . . | sort | uniq -c | sort +1 [-f] | sort +0 -nr
#  This adds a secondary sort key, so instances of
#+ equal occurrence are sorted alphabetically.
#  As he explains it:
#  "This is effectively a radix sort, first on the
#+ least significant column
#+ (word or string, optionally case-insensitive)
#+ and last on the most significant column (frequency)."
#
#  As Frank Wang explains, the above is equivalent to
#+       . . . | sort | uniq -c | sort +0 -nr
#+ and the following also works:
#+       . . . | sort | uniq -c | sort -k1nr -k
########################################################

exit 0

# Exercises:
# ---------
# 1) Add 'sed' commands to filter out other punctuation,
#+   such as semicolons.
# 2) Modify the script to also filter out multiple spaces and
#+   other whitespace.

bash$ cat testfile
This line occurs only once.
This line occurs twice.
This line occurs twice.
This line occurs three times.
This line occurs three times.
This line occurs three times.

bash$ ./wf.sh testfile
      6 this
      6 occurs
      6 line
      3 times
      3 three
      2 twice
      1 only
      1 once

expand, unexpand

The expand filter converts tabs to spaces. It is often used in a pipe.

The unexpand filter converts spaces to tabs. This reverses the effect of expand.
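
For instance (the file names here are only illustrative):

expand -t 4 tabbed-file > spaced-file      # Each tab becomes spaces, with tab stops every 4 columns.
unexpand -a spaced-file > retabbed-file    # Runs of spaces convert back to tabs wherever possible.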

cut

A tool for extracting fields from files. It is similar to the print $N command set in awk, but more limited. It may be simpler to use cut in a script than awk. Particularly important are the -d (delimiter) and -f (field specifier) options.

Using cut to obtain a listing of the mounted filesystems:

cut -d ' ' -f1,2 /etc/mtab

Using cut to list the OS and kernel version:
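
A sketch of one way to do this. The exact fields in the uname -a output vary from system to system, so the field numbers below are an assumption (on a typical Linux box, field 1 is the OS name and field 3 the kernel release):

uname -a | cut -d ' ' -f1,3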
