Computer Science, asked by zoey17, 1 month ago

Differentiate between average and min functions in computer science.

Answers

Answered by prash7777

Zoey, here is your answer:

Answer:

The MIN function returns the lowest value from a set of values. The MAX function returns the highest value from a set of values.

=MIN(A2:A20)

=MAX(A2:A20)
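
The question also asks about the average: in a spreadsheet, the AVERAGE function returns the arithmetic mean of a range, e.g. =AVERAGE(A2:A20). As a rough sketch only (assuming Python, with the cell values already read into a list called values), the difference between the two looks like this:

# Hypothetical values standing in for the spreadsheet range A2:A20.
values = [40000, 50000, 60000, 70000]

lowest = min(values)                 # like =MIN(A2:A20): the smallest value
average = sum(values) / len(values)  # like =AVERAGE(A2:A20): the arithmetic mean

print(lowest)   # 40000
print(average)  # 55000.0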

Answered by luvssamantha1617

Answer:

Most people use “average” and “mean” interchangeably, even though they’re not quite the same thing.

“Average” is a single value that summarizes the typical, or expected, value in a data set. There are a number of ways to arrive at that value, and the “mean” is only one of them. The simple arithmetic mean is found by taking the sum of all the data points and dividing by the number of data points. Most of the time, this is what people mean when they say “average.”
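
As a tiny sketch (Python here is only an assumption; any language works), the arithmetic mean is just the sum divided by the count:

import statistics

data = [4, 8, 15, 16, 23, 42]   # hypothetical data points

mean = sum(data) / len(data)    # sum of all points divided by how many there are
print(mean)                     # 18.0
print(statistics.mean(data))    # the standard library computes the same quantity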

However… there are cases where the arithmetic mean shouldn’t be used, namely when there are large outliers. The classic illustration is a story told this way: say four guys are drinking at a bar, and their annual incomes are $40000, $50000, $60000, and $70000. Their average income (using the mean) is $55000. Now suppose Bill Gates is driving by and asks his limo driver to drop him off for a beer and a bump. Last I heard, he earns about $8 billion a year. If you recalculate the average now, the mean income of the bar’s patrons comes out to roughly $1.6 billion a year. Mathematically correct, but also misleading! In cases like this, where the data set is small and there are big outliers (what statisticians call a “non-normal distribution”), it’s more appropriate to use the median instead - the value at the midpoint of the sorted data - which in this case is the guy who makes $60000 a year. There is also the mode (used less commonly), which is the value that occurs most frequently.
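
The bar story can be checked with a short sketch (assuming Python’s statistics module; the incomes are the ones from the story above):

import statistics

incomes = [40000, 50000, 60000, 70000]
print(statistics.mean(incomes))    # 55000: the mean before the outlier arrives

incomes.append(8_000_000_000)      # Bill Gates walks in
print(statistics.mean(incomes))    # about 1.6 billion: the mean is dragged up
print(statistics.median(incomes))  # 60000: the midpoint barely moves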

So… to a purist, the mean and the average aren’t the same thing. There is a difference, and sometimes the difference matters. But most of the time people use the two terms as though they were interchangeable, and most of the time it works out OK.

