Deductive reasoning determines whether the truth of a conclusion follows necessarily, given a rule, from the truth of the premises. Example: "When it rains, things outside get wet. The grass is outside, therefore: when it rains, the grass gets wet." Mathematical logic and philosophical logic are commonly associated with this style of reasoning.

Inductive reasoning attempts to support a determination of the rule. It hypothesizes a rule after numerous examples are taken to be a conclusion that follows from a precondition in terms of such a rule. Example: "The grass got wet numerous times when it rained, therefore: the grass always gets wet when it rains." While such arguments may be persuasive, they are not deductively valid; see the problem of induction. Science is associated with this type of reasoning.

Abductive reasoning, also known as inference to the best explanation, selects a cogent set of preconditions. Given a true conclusion and a rule, it attempts to select some possible premises that, if also true, would support the conclusion, though not uniquely. Example: "When it rains, the grass gets wet. The grass is outside and nothing outside is dry, therefore: maybe it rained." Diagnosticians and detectives are commonly associated with this type of reasoning.

Deductive reasoning, also called deductive logic, logical deduction, or, informally, "top-down" logic, is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion.
Deductive reasoning links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are followed, then the conclusion reached is necessarily true.
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in the following way: In deductive reasoning, a conclusion is reached reductively by applying general rules that hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion is left. In inductive reasoning, the conclusion is reached by generalizing or extrapolating from initial information. As a result, induction can be used even in an open domain, one where there is epistemic uncertainty. Note, however, that the inductive reasoning mentioned here is not the same as induction used in mathematical proofs – mathematical induction is actually a form of deductive reasoning. Inductive reasoning (as opposed to deductive reasoning) is reasoning in which the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a deductive argument is supposed to be certain, the truth of the conclusion of an inductive argument is supposed to be probable, based upon the evidence given.
The philosophical definition of inductive reasoning is more nuanced than simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms, discussed below).
Many dictionaries define inductive reasoning as reasoning that derives general principles from specific observations, though some sources disagree with this usage. Abductive reasoning (also called abduction, abductive inference or retroduction) is a form of logical inference that goes from an observation to a hypothesis that accounts for the observation, ideally seeking to find the simplest and most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as "inference to the best explanation".
The fields of law, computer science, and artificial intelligence research renewed interest in the subject of abduction. Diagnostic expert systems frequently employ abduction.
Hempel describes the raven paradox in terms of the hypothesis:
(1) All ravens are black.
In strict logical terms, via contraposition, this statement is equivalent to:
(2) Everything that is not black is not a raven.
It should be clear that in all circumstances where (2) is true, (1) is also true; and likewise, in all circumstances where (2) is false (i.e. if we imagine a world in which something that was not black, yet was a raven, existed), (1) is also false. This establishes logical equivalence.
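This equivalence can be checked mechanically. A minimal Python sketch, reducing "is a raven" and "is black" to truth values for a single object, verifies that the material implication and its contrapositive agree in every case:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Check that (raven -> black) and (not black -> not raven) agree
# for every combination of truth values, i.e. contraposition holds.
for raven, black in product([False, True], repeat=2):
    statement_1 = implies(raven, black)          # per-object form of (1)
    statement_2 = implies(not black, not raven)  # per-object form of (2)
    assert statement_1 == statement_2

print("(1) and (2) are logically equivalent")
```

Since the two forms never disagree, any world that makes one true (or false) does the same to the other.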
Given a general statement such as all ravens are black, we would generally consider a form of the same statement that refers to a specific observable instance of the general class to constitute evidence for that general statement. For example,
(3) Nevermore, my pet raven, is black.
is clearly evidence supporting the hypothesis that all ravens are black.
The paradox arises when this same process is applied to statement (2). On sighting a green apple, we can observe:
(4) This green (and thus not black) thing is an apple (and thus not a raven).
By the same reasoning, this statement is evidence that (2) everything that is not black is not a raven. But since (as above) this statement is logically equivalent to (1) all ravens are black, it follows that the sight of a green apple offers evidence that all ravens are black.
Can you derive general rules from observed individual facts?
David Hume, an 18th-century Scottish philosopher, said "No". Hume's problem of induction can be viewed as a "Black swan problem".
How many white swans does one need to observe before inferring that all swans are white?
There are two types of statements: observational and categorical.
In work beginning in the 1930s, Popper gave falsifiability a renewed emphasis as a criterion of empirical statements in science.
Popper noticed that two types of statements are of particular value to scientists.
The first are statements of observations, such as "this is a white swan". Logicians call these statements singular existential statements, since they assert the existence of some particular thing. They are equivalent to a predicate-logic statement of the form: There exists an x such that x is a swan, and x is white.
The second are statements that categorize all instances of something, such as "all swans are white". Logicians call these statements universal. They are usually parsed in the form: For all x, if x is a swan, then x is white. Scientific laws are commonly supposed to be of this type. One difficult question in the methodology of science is: How does one move from observations to laws? How can one validly infer a universal statement from any number of existential statements?
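The two statement forms can be sketched over a finite sample of observations (the list of observed things below is hypothetical; note that over an open domain the universal form can never be exhaustively checked):

```python
# Singular existential: "there exists an x such that x is a swan and x is white"
# Universal:            "for all x, if x is a swan then x is white"
things = [
    {"kind": "swan", "colour": "white"},
    {"kind": "apple", "colour": "green"},
    {"kind": "swan", "colour": "white"},
]

existential = any(t["kind"] == "swan" and t["colour"] == "white" for t in things)
universal = all(t["colour"] == "white" for t in things if t["kind"] == "swan")

print(existential)  # True: at least one white swan was observed
print(universal)    # True for this sample, but only for this sample
```

The `all(...)` check only ranges over what has been observed, which is exactly why no finite run of confirming observations can validly establish the universal statement.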
Inductivist methodology supposed that one can somehow move from a series of singular existential statements to a universal statement. That is, that one can move from 'this is a white swan', 'that is a white swan', and so on, to a universal statement such as 'all swans are white'. This method is clearly deductively invalid, since it is always possible that there may be a non-white swan that has eluded observation (and, in fact, the discovery of the Australian black swan demonstrated the deductive invalidity of this particular statement).
Answer: Popper held that science could not be grounded on such an inferential basis. He proposed falsification as a solution to the problem of induction. Popper noticed that although a singular existential statement such as 'there is a white swan' cannot be used to affirm a universal statement, it can be used to show that one is false: the singular existential observation of a black swan serves to show that the universal statement 'all swans are white' is false—in logic this is called modus tollens. 'There is a black swan' implies 'there is a non-white swan,' which, in turn, implies 'there is something that is a swan and that is not white', hence 'all swans are white' is false, because that is the same as 'there is nothing that is a swan and that is not white'.
One notices a white swan. From this one can conclude:
At least one swan is white.
From this, one may wish to conjecture:
All swans are white.
It is impractical to observe all the swans in the world to verify that they are all white.
Even so, the statement all swans are white is testable by being falsifiable. For, if in testing many swans, the researcher finds a single black swan, then the statement all swans are white would be falsified by the counterexample of the single black swan.
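The asymmetry Popper identified can be sketched in a few lines of Python (the swan records are hypothetical data): a single counterexample falsifies the universal claim via modus tollens, while any number of confirming observations leaves it merely unrefuted.

```python
def falsifies(universal, observations):
    """Return the first observation that falsifies a universal claim, if any.

    `universal` is a predicate; the claim is "for all x, universal(x) holds".
    One counterexample shows the claim is false (modus tollens); no number
    of confirming observations can prove it true.
    """
    for x in observations:
        if not universal(x):
            return x  # counterexample found: the universal statement is false
    return None  # not falsified (but also not proven)

# Hypothetical swan records: (name, colour)
is_white_swan = lambda swan: swan[1] == "white"

europe = [("swan-1", "white"), ("swan-2", "white"), ("swan-3", "white")]
australia = europe + [("swan-4", "black")]

print(falsifies(is_white_swan, europe))     # None: 'all swans are white' survives testing
print(falsifies(is_white_swan, australia))  # ('swan-4', 'black'): the claim is falsified
```

Surviving the test does not verify the statement; it only means the statement has not yet been falsified.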
The Black Swan Theory is used by Nassim Nicholas Taleb to explain the existence and occurrence of high-impact, hard-to-predict, and rare events that are beyond the realm of normal expectations. Unlike the philosophical "black swan problem", the "Black Swan Theory" (capitalized) refers only to unexpected events of large magnitude and consequence and their dominant role in history. Such events are considered extreme outliers.
It is noteworthy that in his writings, Taleb never uses the term "Black Swan Theory"; instead, he refers to "Black Swan Events" (capitalized).
Based on the author's criteria:
1) The event is a surprise (to the observer).
2) The event has a major impact.
3) After the fact, the event is rationalized by hindsight, as if it had been expected.
Questions: Why in the world can I find gigantic shrimp here but not on the East Coast? Or why was the seafood in Kansas City so good?
Answer1: We were in Kansas City, and we were at a restaurant called Stroud's, which is renowned for its fried chicken and gravy. Oh, it is to-die-for stuff! They also have on the menu fried shrimp, and it is the biggest shrimp you have ever seen. Somebody asked Professor Hazlett, "How come I don't see big shrimp like this at any restaurants close to the water? How come bigger shrimp are right here in the heartland of the country, in Kansas City?
"Why in the world can I find gigantic shrimp here but not on the East Coast?" Professor Hazlett said the answer to that is found in economics. Rather than answer it, he threw it to the table to see what we would come up with. The short answer, in his theory, is that it all came back to shipping costs.
Answer2: I remember Rush Limbaugh once talking about a conversation he had with an economist about why the seafood in Kansas City was so good. He said the economist said that it was due to shipping costs. It was as expensive to send bad seafood to Kansas City as it was to send good seafood, so the best thing to do was only send the best seafood. Thus there was not much seafood available but what was there was very good and expensive.
Why do hot dogs always taste better at the ballpark?
First you have to ask: "Why is food at the ballpark so ridiculously expensive?"
Answer: There are two things that affect pricing. One, the less important of the two, is the cost of the item to the business. In baseball, the cost of the item (e.g. a hot dog) includes the cost of the stadium, the players who bring you to the stadium, and so on. The more important consideration is what the public will pay for an item: businesses try to set their prices at the upper end of what the purchasing public will bear. They have to charge more than the cost of the item, however you define that cost, since businesses are in business to make a profit. But there is a point where the number of purchasers and the price of the product produce the highest level of profit.
Question: "Why do hot dogs taste better at the baseball stadium than at home?"
Answer: They use the best kind of hot dogs and all the fat and grease makes them taste good.
Answer: Stadium mustard. (It's spicy).
Best Answer: Since the hot dogs at the ballpark are so ridiculously expensive, they have to make them taste really good or people won't buy.