The Basis of Morality

I have removed this from the "God vs Science" thread on reading obi's post, which I had somehow not noticed earlier, since I think that the question is too broad to be dealt with mashed in with a load of others.

Here is what I like to call the "I can't prove you wrong, but I can prove that you should shut up" argument, with the proof for the basis of the discussion of morality attached.

God's existence, or non-existence, is nigh impossible to prove. Finding contradictions in the God hypothesis doesn't help much either - since it cannot be tested, it is easy to adapt and warp to suit the needs of the moment.

If people choose to believe in a God that is not supported, and possibly cannot be supported, sufficiently through rational means, that is their own problem/personal choice.

However, it becomes a little different when anyone proposes the use of their God hypothesis to justify moral positions and/or acts or tries to convince someone else of the validity of their God hypothesis.

This is because it can be proven, categorically, that God is irrelevant in questions of morality, and by extension, the way we live our lives.

Please bear in mind that this is not merely an entertaining paradox or contradiction: this is a good old fashioned QED, starting from the very first principles of deduction (which I will list in case anyone is unfamiliar with them). My notes are in italics.

Here goes:

First principles:
I doubt that I think, therefore I think, therefore I am, and I think.
I think, therefore there must be a change in my thinking-organ, therefore there must be change.
There is change, therefore there must be space-time, which is necessary for there to be a change.

The place of morality
If I have no control over this change, my actions are irrelevant - there is no need to try to regulate myself. By extension, I cannot be condemned or commended for any action I take.
If I have any control over this change, I must determine what I should do (morality).

The basis of morality:
My moral system must be able to resolve all options available to me as positive, negative or neutral (by definition).
Therefore, my moral system must give a verdict on all possible solutions to all moral questions without ambiguity.

Are there any religions which are undebatable? Most already fail by this requirement, but this is not proof: the next section will do that.

In order for a moral system to provide a moral verdict on all solutions to all situations without ambiguity, it must deal with all possible perceivable worlds.
To accomplish this it is necessary to describe and comment on all these situations and options, or to provide a tool by which these solutions may be found.

To anyone who proposes divine enlightenment as this tool: the fact that there are moral questions where there is uncertainty in the answer disproves this theory, as divine enlightenment would have to be active in all situations. This is minor, however: if you disagree with me, please don't divert your focus from the proof. Constant divine enlightenment is also dealt with under "implications".

For any number of situations, a system that describes each of them individually must be at least as long as the total of those situations, and the possible situations are unbounded.
Therefore, it is necessary to use a tool to determine morality.
There seems to be no indication as to which tool we should use. However, unless rationality is used at one point, an individual may not justify themselves or reason their case (since that would use rationality). This is a legitimate moral position, but a useless one.

Inductive proof of rationality (deductive method of decision making)
If rationality is an acceptable tool in any single situation, we accept in one case the tenets of rationality, which can be represented as:

- If x must equal y
- And x is true
- Then ALWAYS y.

Therefore, if rationality is a legitimate tool in one case, by its own definition, it is true in all cases.
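In standard propositional notation (reading "x must equal y" as the implication x → y), the schema above is just modus ponens:

$((x \rightarrow y) \land x) \rightarrow y$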

The place of a God in moral questions:
Given that all moral questions may be solved rationally (proven above):

- Any God which disagrees with the rational conclusion at any point is immoral, and should not be followed.

- Any God which agrees with the rational conclusions at every point is moral, but is irrelevant, since the conclusions can be reached rationally.

QED

If you haven't given up on this post yet, here are some other important relevant implications:

- Faith is by definition immoral, since it advocates irrational behaviour but accepts rationality.
- Anyone who makes exceptions to rationality, denies rationality itself: their opinions are, by definition, irrelevant.
- Anyone who does not accept rationality should not use it to justify themselves, since it directly contradicts their chosen moral tool.
- Anyone who does not accept rationality denies that they are thinking beings. While not impossible, the degree of improbability is so extreme and the conclusions are so useless (i.e. morality + .........), that this position can be disregarded (Unless, of course, you disregard rationality, in which case this point is irrelevant. Baah. I challenge you to argue with me without using any sort of rationality though. Technically, you are not even allowed to use "because God said so", since that uses the word "because", which demands a rationale).
- While this proof establishes the necessity for rationality to determine morality, it does not do much to determine how. Kant is probably the most famous of those who tried, and the results are a bit creepy and not entirely sound, but rationality is at least a starting point.


... and when the Mathematics pupil presented this proof to the Theology professor, he responded by reading out of the Bible. True story, this one (actually, it was a Divinity teacher, but close enough).

What do you think? It took me a while to compile the proof, but I did try to make it absolutely perfect. See any significant holes?

Yours,

Ascalon.
 
Anyone who does not accept rationality denies that they are thinking beings.

Anyone who makes exceptions to rationality, denies rationality itself: their opinions are, by definition, irrelevant.

In this way, anyone who does not accept rationality has no relevant opinions. Is this correct? If so, it might be worth mentioning something like "If x=y and y=z then x=z" somewhere.
 
Anyone who does not accept rationality cannot participate in a rational discussion. All discussions, that I am aware of, need some basis in rationality, or they cannot take place.

In a sense, while their opinions may be relevant, even correct, they are useless.

This is not the basis for the argument but rather an implication, if you are worried about a self-confirming argument here.

Edit: missed the second part. The trouble with "if x=y and y=z then x=z" is that if x also equals -z, z might not be the result (although it will have existed); and all that needs to stand for rationality to be proven inductively is "if x = y, and x, then always y", since that makes rationality always valid if it is valid once. I don't quite see how that correlates to the above, but maybe I'm missing the point here.
 
I think that Objection's protest is probably correct. Instinct and emotion must at times take precedence over rationality, unless you argue that it is still rational if you somehow can choose and dictate precisely what situations those might be.
 
I see a few problems with this. First of all,
- If x must equal y
- And x is true
- Then ALWAYS y.

Therefore, if rationality is a legitimate tool in one case, by its own definition, it is true in all cases.
I don't see how this is different from saying that I can beat one person in arm-wrestling, therefore I can beat all people in arm-wrestling.

I doubt that I think, therefore I think, therefore I am, and I think.
I think, therefore there must be a change in my thinking-organ, therefore there must be change.
If you've just proven that you think because you doubt that you think, then why should you doubt that you think? If you do, you probably don't think. Also, the idea that you doubt that you think is in itself a MASSIVE assumption.

If I have any control over this change, I must determine what I should do (morality).
I see no reason for this to be the case. You must determine what you will do, but by no means must you determine what you should do. I simply see no logical reason for this conclusion.


Also, you talk about rationality, logic and such. Rationality in my opinion doesn't exist. There's no action that can be justified without making many, many huge assumptions. Logic is the same. Although it of course serves some purpose in society, if you're questioning the assumptions themselves it's simply flawed every time.
 
I don't see why you can't accept things on faith sometimes. One can be trying to be rational and still be wrong. Let's say there is a math problem; I could try to solve it myself (but I'll probably be wrong if it's hard enough). Alternatively I could accept X-Act's answer based on my faith in him. One's own rational inquiries into morality could easily be as wrong as my attempt at a difficult calculus problem.

Also, you talk about rationality, logic and such. Rationality in my opinion doesn't exist. There's no action that can be justified without making many, many huge assumptions. Logic is the same. Although it of course serves some purpose in society, if you're questioning the assumptions themselves it's simply flawed every time.

Also I agree with this. Reason can let you derive conclusions from assumptions but it can't tell you what assumptions you should make (except by showing that some conflict). We need to see if the conclusions fit in with the world we see (if your assumptions lead to murder being good they are likely flawed).

Also I don't know what the basis of morality is. I'd say something like: To the best of your ability try to treat others as they would like to be treated.
 
I don't see why you can't accept things on faith sometimes. One can be trying to be rational and still be wrong. Let's say there is a math problem; I could try to solve it myself (but I'll probably be wrong if it's hard enough). Alternatively I could accept X-Act's answer based on my faith in him.
Well, clearly s/he does accept things on faith... such as the fact that s/he doubts that s/he thinks (the premise of this proof). Faith is completely necessary to function normally in society.
 
CaptKirby:

Firstly, I don't think emotions and rationality are mutually exclusive. I'm probably in a minority here, but I take great care to analyse my emotions in order to determine precisely what they are, why they are caused, whether they are legitimate, and what if anything should be done about them. I don't necessarily ignore emotions, but I consider them only a part of a rational calculation.

PurpurealSunshine:
"I see a few problems with this. First of all,

Quote:
- If x must equal y
- And x is true
- Then ALWAYS y.

Therefore, if rationality is a legitimate tool in one case, by its own definition, it is true in all cases.
I don't see how this is different from saying that I can beat one person in arm-wrestling, therefore I can beat all people in arm-wrestling."

The conditions for rationality to exist at all even once are:

- If x must equal y
- And x is true
- Then ALWAYS y.

If you accept this once, you have accepted it for all cases. In essence, accepting that rationality is acceptable in all cases is necessary to accept it in any case, because consistency is a prerequisite for rationality.
"
Quote:
I doubt that I think, therefore I think, therefore I am, and I think.
I think, therefore there must be a change in my thinking-organ, therefore there must be change.
If you've just proven that you think because you doubt that you think, then why should you doubt that you think? If you do, you probably don't think. Also, the idea that you doubt that you think is in itself a MASSIVE assumption."

When I ask myself whether I can be sure that I am thinking, I doubt that I think. If I doubt, I must therefore think (and I suppose it is no longer necessary for me to doubt).

"
Quote:
If I have any control over this change, I must determine what I should do (morality).
I see no reason for this to be the case. You must determine what you will do, but by no means must you determine what you should do. I simply see no logical reason for this conclusion."

I don't think I'm being clear enough about the use of the word "should". By "should" I mean what the best course of action is (whatever that may be), not whether I should care about anyone outside of myself, etc. That is where the big debates usually come in.

Finally the issue of rationality and logic: I believe that logic is pure and perfect. This means that if x, and x = y, then y. The trouble comes in when we cannot be sure that x is true, and we cannot be sure that x = y. If we find out for certain that y is not true, then I would assume that either x does not equal y or x is not true, not that the system of logic is flawed.

The issue of what assumptions to accept and how to construct the moral system is entirely valid. There is no flawless rationally constructed moral system that I am aware of, and this is where I see the debate coming in. However, my proof is merely to show that any attempt to construct a moral system should be based in rationality.

The Plant:

A final thing: it was rather sneaky to use the word "faith" in your reply, since we are talking about different definitions of faith. The faith I am talking about, which I consider immoral, is the irrational acceptance of beliefs or adherence to behavioural codes; whereas "faith in X-Act", or closer to the point, faith in X-Act's mathematical ability, honesty and conscientiousness, is probably based in rationality, in that you feel that it has been sufficiently demonstrated to accept. Unless of course you feel that X-Act is a god, and should be trusted unconditionally, but I don't think that's what you suggest.

Regards,

Ascalon

p.s. btw I'm a he :p
 
Off topic, with respect to above post:

I agree with most of your first paragraph, but I don't see how morality can be anything but objective. We can try to create better and better moral theories, in a similar manner to which science updates and corrects itself, but there is definitely a right and wrong in any (useful) moral theory.

And I really hate Occam's razor. It can be used to attack ideas that go on for so long that it's practically impossible for an error not to have been made if there is a degree of vagueness in each step, but otherwise it is not a rationally appropriate tool: the correct answer is correct, no matter how long and complicated it is. The fact that numbers like pi and e are irrational is evidence of this.

It's also just plain wrong more than half the time. We live in a stupefyingly complex universe, so explanations are bound to be multifaceted. The simplest is bound to be simplistic, as well as the next most simple ones. We generally try to settle for a simplification that is generally logically sound and most efficiently describes the issue, but oversimplifications that are so simplistic they are just horribly wrong really annoy me (yes, I am the type to spend half of an exam arguing that the assumptions inherent in a question are wrong, often successfully).

Otherwise, can we return to the topic of the OP please? Proofs are important to me.

Regards,

Ascalon
 
I don't see how morality can be anything but objective. We can try to create better and better moral theories, in a similar manner to which science updates and corrects itself, but there is definitely a right and wrong in any (useful) moral theory.

Morality is necessarily relative to the masses that it governs. If people change, if society changes, which it obviously does, then morality has to change accordingly. Morality is meant to help us optimize a very complex function of the well-being of people at large (though determining what this function is is yet another can of worms), but for practical purposes it has to be simplified to the bone. It can never be exact because people need to cognize it. Furthermore, since morality is an important aspect of the lives of people, it follows that morality is more often than not a function of itself. If the color red were immoral and people were very upset by people wearing red because wearing red is immoral, then red would effectively be immoral because it upsets people and it would upset people... because it would be immoral. Morality is a huge, highly complex dynamical system and it would be simplistic to claim that there could be any objectivity to it.

I also don't see any proper way to resolve moral grey, which is the boundary between good and bad (or between good and neutral and neutral and bad).

And I really hate Occam's razor. It can be used to attack ideas that go on for so long that it's practically impossible for an error not to have been made if there is a degree of vagueness in each step, but otherwise it is not a rationally appropriate tool: the correct answer is correct, no matter how long and complicated it is. The fact that numbers like pi and e are irrational is evidence of this.

Occam's razor is essential to rational inquiry. In the eventuality that there are several answers that are compatible with evidence, it is the best tool we have to choose a winner. Look at the following sequence:

1, 2, 3, 4, 5, 6, ...

If I ask you what's the next number, you will probably answer 7. That means that you assume that the sequence has been produced by f(x) = x. However, it stands to reason that f(x) = (x-1)(x-2)(x-3)(x-4)(x-5)(x-6)+x would also produce the same sequence, except that the next number in the series would be 727 instead of 7. Why did you prefer f(x) = x? Because it's simpler. You picked the function which had a greater (bits of explanation)/(explained bits) ratio.
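If you want to check that arithmetic yourself, here is a minimal Python sketch (purely illustrative; the function names are mine):

def f_simple(x):
    return x

def f_complex(x):
    # (x-1)(x-2)(x-3)(x-4)(x-5)(x-6) vanishes for x = 1..6, so both
    # functions agree there; at x = 7 the product becomes 6! = 720.
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5) * (x - 6) + x

print([f_simple(x) for x in range(1, 8)])   # [1, 2, 3, 4, 5, 6, 7]
print([f_complex(x) for x in range(1, 8)])  # [1, 2, 3, 4, 5, 6, 727]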

Occam's razor is a simplified version of the basic principle that the most likely explanation for a phenomenon is the shortest one that does the job. That doesn't mean it is the right one, but it is the one that should be preferred, often by a crushing margin. It works pretty damn well in practice.

Also note that while pi and e may be irrational, they remain computable. Their actual information content (i.e. a program that computes them) is finite and could easily fit on a single line.
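For instance, here is a short Python sketch (one possible illustration, using Machin's formula rather than anything fancy) that pins pi down to as many digits as you ask for:

def pi_digits(n):
    # pi = 16*arctan(1/5) - 4*arctan(1/239), computed with scaled
    # integer arithmetic; the extra 10 digits are guard digits.
    scale = 10 ** (n + 10)
    def arctan_inv(x):
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
        total, term, k, sign = 0, scale // x, 1, 1
        while term:
            total += sign * (term // k)
            term //= x * x
            k += 2
            sign = -sign
        return total
    return (16 * arctan_inv(5) - 4 * arctan_inv(239)) // 10 ** 10

print(pi_digits(50))  # 3141592653589793238462643383279502884197...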

It's also just plain wrong more than half the time. We live in a stupefyingly complex universe, so explanations are bound to be multifaceted.

The universe is not stupefyingly complex. Most of it follows from pretty simple rules.

The simplest is bound to be simplistic, as well as the next most simple ones. We generally try to settle for a simplification that is generally logically sound and most efficiently describes the issue, but oversimplifications that are so simplistic they are just horribly wrong really annoy me (yes, I am the type to spend half of an exam arguing that the assumptions inherent in a question are wrong, often successfully).

Occam's razor does not apply to cases where an explanation is more accurate than the other. Occam's razor is meant to cut out the fat in situations where the fat doesn't do anything. If someone managed to find a short equation for the laws of physics from which every single observation can be derived, you can be sure physicists will drop string theory in a blink. Unless tangible advantages can be derived from the concept of objective morality, it is a perfectly good target for Occam's razor.
 
Your linear concepts of space-time bore me.

In all seriousness, rationality's key weakness as a prerequisite for moral systems is that multiple conflicting actions can be entirely rational.

Terrorists for example are entirely rational. Fear can be used to get people to bend to your will, and terrorists aim to use fear to achieve political ends.

To most moral systems however, taking lives, destroying property, and instilling fear in a population you are trying to manipulate are solidly in the "morally wrong" category.

Then you have republicanism as a means to achieve political ends. You agree among a certain population to elect one person to represent your legitimate aims and give that person the requisite power to do so. At the end of the day you are still giving someone a degree of control over your life in a rational manner. Appeasing the terrorist keeps your physical life intact in the short term; appointing the representative allows you to settle small disputes between a multitude of people with minimal risk.

Both of these strategies are rational in and of themselves, yet have obviously different end results. Results are the basis on which one should base a moral system. It is possible to be entirely irrational and reach a positive result. Selfless altruism in dangerous areas is not rational to the average human being, yet it drives the example for most major missionary work.

"By your fruits will you know them."

The moral system I prefer is the one which makes its case clearly and consistently on a broad range of issues with an underlying set of guiding principles. For those who haven't been around Smogon very long, I am speaking of Catholicism. This moral system is challenging because it often goes against your base and entirely rational inclinations and desires. It is in essence a long-term rationality for living a life of excellence rather than a life of mediocrity. God's existence or non-existence is rather irrelevant except in regards to this point: it founds the purpose for action on non-temporal rewards in service of a non-temporal agency. Moral systems based on human beings in principle are prone to the human faults of their most respected leader. God is referenced in terms impossible to apply to any human being or that any human being can fathom or fashion into a coherent form of malevolence. It is impossible to justify acts of a malevolent nature to be in service of an infinitely benevolent being, at least long term.

Objective morality is that which yields the best results. It thus requires a moral system that is both difficult to manipulate and exhaustively thorough. Major world religions fulfill these criteria where secularist states based on the greatness of one man, a government of men, or mankind itself do not. Humans are morally relative and will do what is rational for their circumstance, not what is moral.
 
- Faith is by definition immoral, since it advocates irrational behaviour but accepts rationality.

Informal induction isn't much different than faith, yet that is what people use when they expect the Sun to rise tomorrow morning. Are you saying that it's immoral to believe that the laws of physics will remain the same and that the sun will thus rise tomorrow?



This moral system is challenging because it often goes against your base and entirely rational inclinations and desires. It is in essence a long-term rationality for living a life of excellence rather than a life of mediocrity

Sounds vaguely like Epicurean hedonism, where one sacrifices pleasure in the immediate future for more pleasure in the long term. Instead of a piece of chocolate now, one gets the whole box later, possibly in small fragments over time.

Of course, your religion doesn't give a shit about pleasure, so the comparison isn't perfect, but the idea is more or less the same ("excellence rather than mediocrity.")

It is impossible to justify acts of a malevolent nature to be in service of an infinitely benevolent being, at least long term.

You've shown how Catholicism provides incentive to be moral. Now show us how it tells us how to be moral better than other systems do. Otherwise, you might have said:

It is impossible to justify acts of a dry nature to be in service of an infinitely wet being.

Objective morality is that which yields the best results. It thus requires a moral system that is both difficult to manipulate and exhaustively thorough. Major world religions fulfill these criteria where secularist states based on the greatness of one man, a government of men, or mankind itself do not.

I don't see how adding God to the bases you describe--for Catholicism is essentially a government of men, except headed by God and his vicar--makes the Catholic moral system that much harder to manipulate.
 
"Quote:
The Plant:

A final thing: it was rather sneaky to use the word "faith" in your reply, since we are talking about different definitions of faith. The faith I am talking about, which I consider immoral, is the irrational acceptance of beliefs or adherence to behavioural codes; whereas "faith in X-Act", or closer to the point, faith in X-Act's mathematical ability, honesty and conscientiousness, is probably based in rationality, in that you feel that it has been sufficiently demonstrated to accept. Unless of course you feel that X-Act is a god, and should be trusted unconditionally, but I don't think that's what you suggest.

I don't know many religious people that I respect who accept what their religion says unconditionally. Perhaps, based on my experience that living my life according to religion X has brought me good things, I'm willing to accept its tenets.
 
For me, the basis of morality is simple: I don't like suffering. An action is good if it lowers suffering (with the most morally right action being the one that lowers suffering the most), bad if it increases or causes suffering, and neutral if it has no influence on suffering. I also make a reverse statement about happiness. Good actions are those that increase happiness, bad are those that decrease it, and neutral are those that have no effect. The basis is simple.

It is the details of morality that are tricky, and this is why I love The Case of the Speluncean Explorers. If several people are trapped beneath a mountain due to a rock slide, under what conditions (if any) is it morally right to kill one so that the others may survive by eating him?

Occam's Razor doesn't say "The simplest explanation is correct, always." The actual point it's making is that once you come up with several theories that explain reality equally well, you should accept the simplest one. You can create complex theories of epicycles to explain that the sun revolves around the earth, but simple gravity explains things better with the earth revolving around the sun.
 
Morality is necessarily relative to the masses that it governs. If people change, if society changes, which it obviously does, then morality has to change accordingly. Morality is meant to help us optimize a very complex function of the well-being of people at large (though determining what this function is is yet another can of worms), but for practical purposes it has to be simplified to the bone. It can never be exact because people need to cognize it. Furthermore, since morality is an important aspect of the lives of people, it follows that morality is more often than not a function of itself. If the color red were immoral and people were very upset by people wearing red because wearing red is immoral, then red would effectively be immoral because it upsets people and it would upset people... because it would be immoral. Morality is a huge, highly complex dynamical system and it would be simplistic to claim that there could be any objectivity to it.

The dictates and output of the moral system may change (which is not necessarily relativism) but the moral system itself, and the foundations it is built on, do not. If you build your morality upon the Golden rule and rationality (preferably in a manner less extreme than Kant), using a certain linguistic expression may be perfectly acceptable at one period in time, but offensive and therefore immoral in another, because of the connotations that expression has picked up with the progression of society.

However, I think we differ when it comes to your example with the colour red. My moral system is indifferent to whether people's feelings are hurt, because it is not based on the shortcut of humanism. I would argue that if red upset people without it symbolising extreme disrespect or hatred towards anyone or anything in particular, that would be the affair of the observer, irrelevant to my choice. I analyse my own feelings and reactions and feel that anyone who does not do the same will likely agitate themselves irrationally.

I also don't see any proper way to resolve moral grey, which is the boundary between good and bad (or between good and neutral and neutral and bad).

My "moral output system" will give a result of "immoral", "moral", or "indifferent" which happens whenever there is insufficient information to make a condemnation, or where the will is not compromised (immoral) or its development encouraged (moral). If these neutral areas are "grey areas", then my attitude would be "do so if you please".

Occam's razor is essential to rational inquiry. In the eventuality that there are several answers that are compatible with evidence, it is the best tool we have to choose a winner. Look at the following sequence:

1, 2, 3, 4, 5, 6, ...

If I ask you what's the next number, you will probably answer 7. That means that you assume that the sequence has been produced by f(x) = x. However, it stands to reason that f(x) = (x-1)(x-2)(x-3)(x-4)(x-5)(x-6)+x would also produce the same sequence, except that the next number in the series would be 727 instead of 7. Why did you prefer f(x) = x? Because it's simpler. You picked the function which had a greater (bits of explanation)/(explained bits) ratio.

Occam's razor is a simplified version of the basic principle that the most likely explanation for a phenomenon is the shortest one that does the job. That doesn't mean it is the right one, but it is the one that should be preferred, often by a crushing margin. It works pretty damn well in practice.

Also note that while pi and e may be irrational, they remain computable. Their actual information content (i.e. a program that computes them) is finite and could easily fit on a single line.

The universe is not stupefyingly complex. Most of it follows from pretty simple rules.

The more I learn about physics the more I disagree. Can you explain the reason for the relationship between electric and magnetic forces? Or why the strong nuclear force exists? And what the hell is light anyway? And as I am sure you are aware, Newton's laws (the "simple" rules) only work in non-extreme scenarios, and even then, more than one model is needed.

Occam's razor does not apply to cases where an explanation is more accurate than the other. Occam's razor is meant to cut out the fat in situations where the fat doesn't do anything. If someone managed to find a short equation for the laws of physics from which every single observation can be derived, you can be sure physicists will drop string theory in a blink. Unless tangible advantages can be derived from the concept of objective morality, it is a perfectly good target for Occam's razor.

My objection is to the abuse of Occam's razor. For instance, I am aware that pi and e and root 2 can be expressed, and even drawn without too much difficulty, but someone abusing Occam's razor may demand that I find a decimal answer. Cutting out the useless fat, as you put it, is fine, but not removing something that may be relevant, simply because it has not yet proved to be so.

To illustrate: suppose I modelled (somehow) the stock market, with various inputs, including the price of oil, currency exchange rates, etc., with rationales for the presence of each one. Someone abusing the razor may say that I should remove the input of the oil price, because it would not alter the way my model fits past trends. However, the oil price would have been included with a strong rationale, and removing it to simplify an issue would make the model wrong, even if its output was consistent with historical trends.

In essence, if a component is rational and valid, it is not a legitimate target of Occam's. The system of objective rational morality, based on a consistently valid tool, which can identify moral and immoral actions (although the output may be different depending on your inputs), has far fewer problems and inherent contradictions than relativism, which may state, in an extreme and bizarre example, that cleaning my teeth is immoral if my society deems it so for a non-rationally valid reason (e.g. God spoke to my ancestors in a vision and told me it was evil.)

Thanks for your post; I especially liked the function, and I intend to use it on my tutees :p

Your linear concepts of space-time bore me.

In all seriousness, rationality's key weakness as a prerequisite for moral systems is that multiple conflicting actions can be entirely rational.

Terrorists for example are entirely rational. Fear can be used to get people to bend to your will, and terrorists aim to use fear to achieve political ends.

But what objective are they being rational towards? If their political ends are the ultimate good, then I would argue that they are acting morally. However, I would oppose anyone who suggests that the type of terrorists we have at the moment are working towards the ultimate good.

They may well be a very fine example of rationality done badly. Their starting assumption may be something like "my religion is supreme" or "my political worldview is the ultimate good", in which case their actions may be entirely rational. This is the fundamental problem with theocracies: if a religion is supreme, no matter how much you try to argue logically, you go in the wrong direction.

The problem is averted by using more rationality in the choice of starting assumptions. The supremacy of the will (having one, not exercising it), freedom to pursue your own lifestyle, materialism, hedonism, the Golden Rule, and non-violence are all superior starting points for a rational moral system.

To most moral systems however, taking lives, destroying property, and instilling fear in a population you are trying to manipulate are solidly in the "morally wrong" category.

Then you have republicanism as a means to achieve political ends. You agree among a certain population to elect one person to represent your legitimate aims and give that person the requisite power to do so. At the end of the day you are still giving someone a degree of control over your life in a rational manner. Appeasing the terrorist keeps your physical life intact in the short term; appointing the representative allows you to settle small disputes between a multitude of people with minimal risk.

Both of these strategies are rational in and of themselves, yet have obviously different end results. Results are the basis on which one should base a moral system. It is possible to be entirely irrational and reach a positive result. Selfless altruism in dangerous areas is not rational to the average human being, yet it drives the example for most major missionary work.

"By your fruits will you know them."

I vehemently oppose this, primarily because you cannot know what the results will be. If I cook a meal using ginger, and someone who eats it dies of a rare ginger allergy which they themselves were not aware of, was my action immoral? If so, why should we even try to be moral at all, since we cannot fundamentally determine whether we are moral or not?

The selfless altruism is different, I think. Presumably missionaries have a moral system which tells them that helping others is the ultimate good, which makes selfless altruism entirely rational. Rational does not need to mean cold-hearted, self-interested or even normal.

The moral system I prefer is the one which makes its case clearly and consistently on a broad range of issues with an underlying set of guiding principles. For those who haven't been around Smogon very long, I am speaking of Catholicism. This moral system is challenging because it often goes against your base and entirely rational inclinations and desires. It is in essence a long-term rationality for living a life of excellence rather than a life of mediocrity. God's existence or non-existence is rather irrelevant except in regards to this point: it founds the purpose for action on non-temporal rewards in service of a non-temporal agency. Moral systems based on human beings in principle are prone to the human faults of their most respected leader. God is referenced in terms impossible to apply to any human being or that any human being can fathom or fashion into a coherent form of malevolence. It is impossible to justify acts of a malevolent nature to be in service of an infinitely benevolent being, at least long term.

I had to use different colours here, as I found a number of different issues here.

Firstly, my understanding of Catholicism, having been raised a Catholic (although admittedly I didn't pay that much attention, so please correct me if I get something wrong), is that:
1. We commit sins against God, through our own fault.
2. God cannot accept this sin, being diametrically opposed to it.
3. We may ask Jesus for forgiveness, who is able to forgive us, via His crucifixion, if we commit to obeying His (perfect) commandments.

We assume therefore:
Good things are good because God commanded them. This makes the term omnibenevolent somewhat meaningless, and I personally cannot accept this: rape for fun will always be morally unacceptable, irrespective of any divine commandments. Furthermore, Catholicism does not provide all the answers (that I am aware of). Despite having what I feel relatively confident in calling the most comprehensive moral doctrines of any branch of Christianity, it simply cannot cater to all situations without the use of rational deduction, at which point the inductive proof of rationality, and its place as the basis of morality, applies (since even assuming the righteousness of God's commandments in one instance does not prove His existence or His righteousness).

When you state that the question of God's existence is irrelevant to the moral issue, we are at an equal place. However, you then imply that following the commandments of this God is the ingredient for living a life of excellence. There is a huge rational gap here: why these commandments? Why not others? Why use commandments at all?

Now I'm completely lost. You appear to have redefined Catholicism as long-term self-interest. While self-interest may be a valid output of a rational moral system, how can the existence of God be irrelevant? You either get rewarded, or you don't.

I do not propose moral systems based on people. I propose moral systems based on rationality, constructed from rational ideas. I mentioned hedonism, the Golden Rule, and enlightenment earlier, with a number of other examples, which should yield a moral output to every question infallibly.

I feel like I'm repeating myself a little, so I'll keep this brief: if the commandments of a God are good by definition, then rape, genocide, torture, etc. are all reasonable acts if God commands them. I find this unpalatable and therefore reject good as defined by what a God commands, but if you accept this I suppose it is consistent and rational. Most people are not prepared to obey this kind of God though.

Objective morality is that which yields the best results. It thus requires a moral system that is both difficult to manipulate and exhaustively thorough. Major world religions fulfill these criteria where secularist states based on the greatness of one man, a government of men, or mankind itself do not. Humans are morally relative and will do what is rational for their circumstance, not what is moral.

This is not so much an issue of the validity of religion as a personal moral choice but a justification of religious systems as a global moral guideline. I do believe that systems such as the supremacy and sanctity of the will or the Golden rule can function better than these religions, since if a religion is said to be the ultimate good, then the more strictly it is followed, the more goodness there is, which leads to all manner of problems. I still object to the consequentialism here: I believe good to be separate from the results it accidentally causes (although not necessarily from the results it intentionally causes).

In any case, that was a highly entertaining read. I hope my reply wasn't too boring.

I don't know many religious people that I respect who accept what their religion says unconditionally. Perhaps, based on my experience that living my life according to religion X has brought me good things, I'm willing to accept its tenets.

If your religion has brought you sufficient positives to justify following it, then your true morality is that of rational self-interest, since that is your justification for your actions. Nothing wrong with that, but it does not defend the use of religions to justify moral positions.

Many thanks for the replies,

Ascalon.
 
The basis of morality is clearly subjective. But I believe humans in general may have gotten their ideas about morality from people identifying other people's actions that hurt them individually, and applying more general principles based on that, not necessarily consciously. And since these actions are fairly universal, there is a common morality that human societies in general seem to share. But this is clearly an invented morality, not one that is inherent.
 
Heh, I also loved The Case of the Speluncean Explorers. It does go beyond "simple" morality though - it dives into the place and rigidity of the law, and the role of the state, as well as where immorality should be punished.

In my opinion, it would only have been a legitimate moral action to kill the man if he agreed to it right up to the moment he died (difficult to engineer perhaps, but not impossible). However, I would have found the men innocent, since their motive was immediate self-preservation, and not many can be expected to be able to make the moral choice of allowing everyone to starve to avoid killing a single man, and the verdict would not have lessened the deterrent of the justice system in any way.

As you can probably tell, I dislike utilitarianism. It functions on the assumptions that people are equal and that total global happiness is the ultimate good, which doesn't sit well with me, being a tortured soul and all that. I wonder how you react to Huxley's Brave New World though. Would that sort of society, with a far higher "total global happiness" quotient, be superior to our own? If not, how would you justify the discrepancy?

Yours interestedly (is that even a word?),

Ascalon.
 
"The more I learn about physics the more I disagree. Can you explain the reason for the relationship between electric and magnetic forces? Or why the stong nuclear force exists? And what the hell is light anyway? And as I am sure you are aware, Newton's laws (the "simple" rules) only work in non-extreme scenarios, and even then, more than one model is needed. "

Occam's razor doesn't mean that the solution to something is likely to be simple; in fact, it isn't. However, because there are exponentially more complex solutions than simple ones, a simple one that explains as well as a complex one is more likely.
 
As you can probably tell, I dislike utilitarianism. It functions on the assumptions that people are equal and that total global happiness is the ultimate good, which doesn't sit well with me, being a tortured soul and all that. I wonder how you react to Huxley's Brave New World though. Would that sort of society, with a far higher "total global happiness" quotient, be superior to our own? If not, how would you justify the discrepancy?

Yours interestedly (is that even a word?),

Ascalon.

My only problem with Brave New World is that the government and society appear to force those behaviors. My ultimate moral goal is maximizing happiness, but my ultimate legal goal is maximizing freedom (because people know what makes themselves happy more than I do).
 
For me, the basis of morality is simple: I don't like suffering. An action is good if it lowers suffering (with the most morally right action being the one that lowers suffering the most), bad if it increases or causes suffering, and neutral if it has no influence on suffering. I also make a reverse statement about happiness. Good actions are those that increase happiness, bad are those that decrease it, and neutral are those that have no effect. The basis is simple.

Whose suffering? What about actions that increase suffering in the short term but reduce suffering in the long term? What about actions that increase suffering at one place and lower it at another? How do you measure suffering anyway? Also, the statement using suffering is not, strictly speaking, equivalent to the statement using happiness.

Ascalon said:
If you build your morality upon the Golden rule and rationality

Building your morality upon the Golden rule implicitly assumes that every agent functions the same, which is only true to some extent and is something that may change in the future. It is thus a wholly insufficient basis. Furthermore, the Golden rule is only rational insofar as others can do to you what you can do to them, and effectively few people apply it in the way they treat most animals (except perhaps pets). It is non-trivial to determine the set of entities who are subject to these rules and considerations and it is not clear at all that there is any objective criterion for it other than the extent of our empathy (which is subject to change).

My moral system is indifferent to whether people's feelings are hurt, because it is not based on the shortcut of humanism. I would argue that if red upset people without it symbolising extreme disrespect or hatred towards anyone or anything in particular, that would be the affair of the observer, irrelevant to my choice. I analyse my own feelings and reactions and feel that anyone who does not do the same will likely agitate themselves irrationally.

Ah, but my moral system is different. In my moral system, many people are naturally irrational (lost causes, basically) and it is the responsibility of rational people to accommodate them and manipulate them as much as it is reasonable to do so. They can't figure out what's good, so you have to do it for them. My moral system is quite pragmatic, you see.

My "moral output system" will give a result of "immoral", "moral", or "indifferent" which happens whenever there is insufficient information to make a condemnation, or where the will is not compromised (immoral) or its development encouraged (moral). If these neutral areas are "grey areas", then my attitude would be "do so if you please".

That is not what I mean. The grey areas are not what's neutral; the grey areas are the space around the boundaries. Your "moral output system" only outputs three values. If you consider some action on one or several ordered axes, for example, "killing X people to save Y people", you might see that some values of X and Y will be moral, some neutral and some immoral. At some point there will be a value of X such that a very, very little change will cause the moral output to switch. More often than not, if you are in a situation that's very near the boundary you won't really have a clue whether it's moral, immoral or neutral to do something. Most moral dilemmas lie in that moral grey, on the fine line between morality and immorality. Adding neutrality doesn't solve anything; it just shifts the problem to the fine line between morality and neutrality.
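To make the boundary point concrete, here is a toy Python sketch (the thresholds are completely made up; the only point is that a one-unit change in the input flips the verdict):

def verdict(killed, saved):
    # Three-valued output over the "kill X to save Y" axis, with
    # arbitrary cut-offs on the saved/killed ratio.
    ratio = saved / killed if killed else float("inf")
    if ratio > 10:
        return "moral"
    if ratio < 2:
        return "immoral"
    return "neutral"

print(verdict(1, 10))  # neutral
print(verdict(1, 11))  # moral -- one extra person saved flips the verdict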

The more I learn about physics the more I disagree. Can you explain the reason for the relationship between electric and magnetic forces? Or why the strong nuclear force exists? And what the hell is light anyway?

These questions are a waste of time and they are nonsensical. "Why" can only be evaluated from a subjective point of view, hence asking "why the strong nuclear force exists" is basically implying the existence of God because it's the only context where it would make sense to ask such a question. There is no "why". There is no reason. Things are the way they are and that's it. Try to find an answer to any of these questions, regardless of whether it is the right answer or not, and you'll probably realize that you can't, that there simply isn't any way you could possibly answer them, and that even if you could you'd just have yet another "why" to find an answer to.

And as I am sure you are aware, Newton's laws (the "simple" rules) only work in non-extreme scenarios, and even then, more than one model is needed.

Given the size of the universe, if its rulebook can fit in a pocket, I consider that it is simple. To me, simplicity is irrelevant to a theory's intuitiveness or how easy it is to understand it. Furthermore, extremely simple rules can lead to extraordinarily complex phenomena. It is very possible that in the future we'll find simpler rules than what we have now from which all the others can be derived.
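One standard illustration of the point about simple rules (my example, nothing anyone here raised): elementary cellular automata such as Rule 30 have an update rule that fits on a single line, yet produce patterns irregular enough that the centre column has been used as a pseudo-random bit source. A minimal Python sketch:

def rule30(width=63, steps=30):
    # Rule 30: each new cell is left XOR (centre OR right).
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]

rule30()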

To illustrate: suppose I modelled (somehow) the stock market, with various inputs, including the price of oil, currency exchange rates, etc., with rationales for the presence of each one. Someone abusing the razor may say that I should remove the input of the oil price, because it would not alter the way my model fits past trends. However, the oil price would have been included with a strong rationale, and removing it to simplify an issue would make the model wrong, even if its output was consistent with historical trends.

If your model fits past trends without including the price of oil, you should re-evaluate your rationale for including it, because it is most likely wrong. It is dangerous to trust "rational" arguments when they go directly against evidence. Rational arguments are worthless (and probably wrong) if they don't lead to results. Now, maybe you have evidence that the price of oil will have an effect in the future, or maybe you have evidence that the price of oil *can* have a huge influence and you don't want to risk fucking up your predictions by ignoring it. That's all fine. But if you include the price of oil on some good rational argument but it turns out that it does nothing in a case where you would have expected an effect, you really should re-evaluate your argument.

In essence, if a component is rational and valid, it is not a legitimate target of Occam's.

I would rather say that Occam's is a good test to determine whether your component is actually rational and valid. If your component doesn't do anything and there is no evidence that it ever will, there's something wrong with your reasons.

The system of objective rational morality, based on a consistently valid tool, which can identify moral and immoral actions (although the output may be different depending on your inputs)

What tool? Give me a mathematical function corresponding to objective rational morality. I want numbers. Cold, hard fact. No vague terms like "happiness", I want a clear quantification of happiness, one that I could calculate. If you cannot give me a function directly, give me a procedure which will, at its limit, yield the function. Until you can give me objective rational morality, I will have to deny both its existence (because it would lack a proper definition) and its usefulness (morality is only useful if it is known).

has far fewer problems and inherent contradictions than relativism, which may state, in an extreme and bizarre example, that cleaning my teeth is immoral if my society deems it so for a non-rationally valid reason (e.g. God spoke to my ancestors in a vision and told me it was evil.)

That's amusing, because if we lived in a society where cleaning one's teeth was immoral, we'd be here on the forums arguing the same things. Except that in order to ridicule moral relativism, you might be giving me an example with a hypothetical world where cleaning one's teeth is not immoral. You don't know what you'd be thinking in that world. Whether you like it or not, right now, you have a moral system for yourself. Especially insofar as your ideas are widespread, that is bound to taint your perception. Right now you will vehemently assert that every human has the same rights. Fast forward 1,000 years to where humanity is split into two castes based on intelligence and having different rights, and you will vehemently assert that rights are a function of intelligence. To me it's simple: morality is "whatever works". Simple. Pragmatic. And ultimately, an accurate picture of morality throughout the ages, whether you like it or not.

In the end, whether there is an absolute morality or not is irrelevant. We have never used it, we are not using it and it is not clear that we ever will. Relative morality is simple, easy, pragmatic and it appears naturally in all societies.

As you can probably tell, I dislike utilitarianism. It functions on the assumptions that people are equal and that total global happiness is the ultimate good, which doesn't sit well with me, being a tortured soul and all that. I wonder how you react to Huxley's Brave New World though. Would that sort of society, with a far higher "total global happiness" quotient, be superior to our own? If not, how would you justify the discrepancy?

From the perspective of that society, it would be superior. They would not miss features of our society that they have never known, in the same way that we cannot directly appreciate the features that this future society would have. I would say that there isn't really any way to compare them. One could very well be really against something and change their mind once it's done. I suspect that such is the case with the Brave New World society.

Here's another thought experiment to ponder (one of my favorites):

Imagine that there are six billion "people" on the Earth. One thousand of them are normal humans like you and me. All the rest are puppets obeying a single hive mind whose purpose is to make the one thousand "real" humans happy. These puppets cannot be told apart from real humans. All real humans are far apart and their interactions are prevented or tightly controlled. In that world, morality is irrelevant to real humans: they can do whatever the fuck they want and the hive mind can easily absorb it. Want to be a serial killer? Kill as many puppets as you want; the hive mind will put cops on the case that are too incompetent to catch you. Then, the only issue of "morality" is about how the hive mind would manipulate the environment.

So I say: imagine the perfect circumstances that you could have been born in. The best friends you could ever have had. Imagine the right amount of regrettable things you think you would need to have done in order to avoid regretting never having had regrets. Well, the hive mind could do exactly that. It could make you good looking, but not a supermodel, smart but not a genius. It could place you in the hands of loving parents. It could let you make mistakes but always place people around you to help you recover from them and learn from them. It could make sure that you have the right tools to succeed at what you want to do. If you're a jerk, it would let you get away with it, because you'd just be a jerk to puppets. And you would never suspect that it's all fake. Or perhaps the hive mind could scatter hints in such a way that you can figure out the scam, and when you do you feel proud of yourself. It could ease you into accepting it. The possibilities are endless.

What I am describing is all a big lie. It also does not seem to have any real utility and it does not seem likely at all that it would ever happen. Still, I believe that it would be the perfect world for a human to live in.
 
Here's another thought experiment to ponder (one of my favorites):

Imagine that there are six billion "people" on the Earth. One thousand of them are normal humans like you and me. All the rest are puppets obeying a single hive mind whose purpose is to make the one thousand "real" humans happy. These puppets cannot be told apart from real humans. All real humans are far apart and their interactions are prevented or tightly controlled. In that world, morality is irrelevant to real humans: they can do whatever the fuck they want and the hive mind can easily absorb it. Want to be a serial killer? Kill as many puppets as you want; the hive mind will put cops on the case that are too incompetent to catch you. Then, the only issue of "morality" is about how the hive mind would manipulate the environment.

So I say: imagine the perfect circumstances that you could have been born in. The best friends you could ever have had. Imagine the right amount of regrettable things you think you would need to have done in order to avoid regretting never having had regrets. Well, the hive mind could do exactly that. It could make you good looking, but not a supermodel, smart but not a genius. It could place you in the hands of loving parents. It could let you make mistakes but always place people around you to help you recover from them and learn from them. It could make sure that you have the right tools to succeed at what you want to do. If you're a jerk, it would let you get away with it, because you'd just be a jerk to puppets. And you would never suspect that it's all fake. Or perhaps the hive mind could scatter hints in such a way that you can figure out the scam, and when you do you feel proud of yourself. It could ease you into accepting it. The possibilities are endless.

What I am describing is all a big lie. It also does not seem to have any real utility and it does not seem likely at all that it would ever happen. Still, I believe that it would be the perfect world for a human to live in.

Interesting thought experiment.

The problem is that you have a trade-off between the limitation of human interaction with puppets and the relative freedom of each human. Given the distribution of wealth across societies, the humans actually have a relatively small area over which they can be distributed if the hive mind wants to maximize happiness. Thus, it is extremely difficult to accommodate many actions because most will risk affecting another human. Therefore, a better scenario might be to say that there is only one human, born after the previous one dies, around whom the whole world revolves.

Second of all, while the hive mind wants to make people happy, it also has the means to determine exactly what kind of happiness this is. It could place the children in environments where they grow up appreciating an arbitrary set of values that are relatively easy to accommodate; at the same time, this would preclude the existence of jerks and the like, except where they would only ever come into contact with puppets, for the sake of maximizing the happiness of the true humans.

Perhaps the best scenario for the hive mind would be to make the humans the children of severely mentally ill parents who keep them locked up in their rooms and never reveal them to the public, but who take good care of them nonetheless, so that the children love their parents and never aspire to be anything more than loved by them; then, when the parent is on the cusp of dying, the hive mind destroys the house in a fire while the child is asleep, and the child suffocates to death.


My only problem with Brave New World is that the government and society appear to force those behaviors. My ultimate moral goal is maximizing happiness, but my ultimate legal goal is maximizing freedom (because people know better than I do what makes them happy).

Or the government could program people to be happy... that way it would know exactly how to make them happy, and deviants would also be clearly identified - you can't deny that any society inevitably produces mental deviants.
 
Building your morality upon the Golden rule implicitly assumes that every agent functions the same, which is only true to some extent and is something that may change in the future. It is thus a wholly insufficient basis. Furthermore, the Golden rule is only rational in so far that others can do to you what you can do to them, and effectively few people apply it in the way they treat most animals (except perhaps pets). It is non-trivial to determine the set of entities who are subject to these rules and considerations and it is not clear at all that there is any objective criterion for it other than the extent of our empathy (which is subject to change).

Well, if I were to use what is probably the most thoroughly examined suggestion, I would derive a series of rules as to what I, as a rational being (which I can prove that I am), ignoring all irrational influences, would not want to happen to me. Using this model, Kant made the following deductions:
- Lying is wrong, as a rational being would not want to be deceived.
- Killing is wrong, as a rational being would not want to stop being.
- Rape/assault is wrong, as a rational being would not want to be damaged.
etc.

While I do not agree with this system, it can be consistent and absolute. Furthermore, we have no evidence that animals are rational in the same way as humans.

Ah, but my moral system is different. In my moral system, many people are naturally irrational (lost causes, basically) and it is the responsibility of rational people to accommodate them and manipulate them as much as it is reasonable to do so. They can't figure out what's good, so you have to do it for them. My moral system is quite pragmatic, you see.

My moral system is also quite pragmatic. It is indifferent to the self-inflicted plight of the cattle, and finds accommodating them an unreasonable imposition. I admit, I have been called the future apocalyptic world dictator on occasion (although my rather unusual political theory may also be responsible).

That is not what I mean. The grey areas are not what's neutral, the grey areas are the space around the boundaries. Your "moral output system" only outputs three values. If you consider some action on one or several ordered axes, for example, "killing X people to save Y people", you might see that some values of X and Y will be moral, some neutral and some immoral. At some point there will be a value of X such that a very, very small change will cause the moral output to switch. More often than not, if you are in a situation that's very near the boundary you won't really have a clue whether it's moral, immoral or neutral to do something. Most moral dilemmas lie in that moral grey, on the fine line between morality and immorality. Adding neutrality doesn't solve anything, it just shifts the problem to the fine line between morality and neutrality.
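To make the boundary problem concrete, here is a toy sketch (the scoring rule and the thresholds are numbers I made up purely for illustration, not anyone's actual moral theory): a three-valued verdict for "kill X people to save Y people", cut at an arbitrary neutral band.

def moral_verdict(x_killed, y_saved, neutral_band=1):
    # Score the action by net lives saved, then cut at an arbitrary band.
    net = y_saved - x_killed
    if net > neutral_band:
        return "moral"
    if net < -neutral_band:
        return "immoral"
    return "neutral"

print(moral_verdict(3, 5))  # "moral"
print(moral_verdict(4, 5))  # "neutral" - one extra death flips the verdict
print(moral_verdict(7, 5))  # "immoral"

Adding the neutral band does not remove the sharp edge; it just moves it, here to the line between killing three and killing four.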

A morally neutral outcome would be the result of one of two things:
- The action would not affect the derived, valid principles.
- There is insufficient information to reasonably conclude the way in which the principles would be affected.

Consider this hypothetical example: The Evil Empire captures you and your comrade. You both know vital information which would destroy the hopes of humanity forever if it was found out, you know for certain that the Evil Empire will use torture to try to obtain this information, and that your comrade will not be able to resist. You must kill him.

Now, we alter the situation slightly so that all of the above is uncertain. The Resistance may possibly survive with the information betrayed, the Evil Empire may not resort to torture, and your comrade might not break down. You don't know the relevant probabilities, and you are unable to rationalise strongly one way or the other. Killing him would be a neutral moral act.

[Snip]

If your model fits past trends without including the price of oil, you should re-evaluate your rationale for including it, because it is most likely wrong. It is dangerous to trust "rational" arguments when they go directly against evidence. Rational arguments are worthless (and probably wrong) if they don't lead to results. Now, maybe you have evidence that the price of oil will have an effect in the future, or maybe you have evidence that the price of oil *can* have a huge influence and you don't want to risk fucking up your predictions by ignoring it. That's all fine. But if you include the price of oil on some good rational argument but it turns out that it does nothing in a case where you would have expected an effect, you really should re-evaluate your argument.

We make our models and rational arguments without foreknowledge of the future. My hypothetical model predicts all past results, as does the model without the oil price. I would still trust the model with the oil price if the inclusion of the oil price was rational, despite the lack of evidence that the inclusion is necessary (and obviously in the absence of evidence that it is not). If later results show that the inclusion of the oil price is wrong, I would naturally re-evaluate the rationale behind its inclusion.

I would rather say that Occam's is a good test to determine whether your component is actually rational and valid. If your component doesn't do anything and there is no evidence that it ever will, there's something wrong with your reasons.

If thus far a component has not influenced anything, but does predict a difference in the future, I would still include it if I was satisfied with the rationale. Perhaps, in the solving of quadratics, dividing by the variable has not as yet removed any of a certain student's answers. However, he should still avoid this and solve them properly, as there are rational grounds for expecting that he will lose some of his answers (assuming, of course, that there is no way for him to see any examples of this until he must solve one).
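To make the quadratics point concrete (the specific equation is my own choice, just for illustration): take x^2 = 3x. Dividing both sides by x gives only x = 3; factoring instead gives x(x - 3) = 0, so x = 0 and x = 3. The division silently throws away the x = 0 answer, which is exactly the kind of loss the rationale warns about, even if the student has never yet been bitten by it.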

What tool? Give me a mathematical function corresponding to objective rational morality. I want numbers. Cold, hard fact. No vague terms like "happiness", I want a clear quantification of happiness, one that I could calculate. If you cannot give me a function directly, give me a procedure which will, at its limit, yield the function. Until you can give me objective rational morality, I will have to deny both its existence (because it would lack a proper definition) and its usefulness (morality is only useful if it is known).

That's amusing because if we lived in a society where cleaning one's teeth was immoral, we'd be here on the forums arguing the same things. Except that in order to ridicule moral relativism, you might be giving me an example with a hypothetical world where cleaning one's teeth is not immoral. You don't know what you'd be thinking in that world. Whether you like it or not, right now, you have a moral system for yourself. Especially insofar as your ideas are widespread, that is bound to taint your perception. Right now you will vehemently assert that every human has the same rights. Fast forward 1,000 years to a world where humanity is split into two castes based on intelligence, with different rights, and you will vehemently assert that rights are a function of intelligence. To me it's simple: morality is "whatever works". Simple. Pragmatic. And ultimately, an accurate picture of morality throughout the ages, whether you like it or not.

Actually, I don't consider all people equal. I do believe that their worth may be difficult to evaluate, and that a state must avoid making these judgements, but the idea of all people being equal is actually quite absurd given some serious thought. Is a rapist, who acknowledges the evil of his deeds but doesn't care, equal to, say, someone who sacrifices themselves for a cause they believe in?

And I don't think that my experiences necessarily determine my conclusions. Experiences only help to test our rules against our moral tools: once, when trying to rationalise one of my beliefs, I realised that my procedure was not rationally responsible, and tried to evaluate it from first principles, actually coming to a different conclusion and changing my belief. It is possible to defy societal influence and decide morality for yourself. Societal morals do not only change because of changes in the (technological or political) situation, but also because of the ideas of visionaries who could look beyond the accepted ideologies.

Also, we tend to ignore the consequences of the world system we have at the moment. More intelligent/able/experienced/proven individuals usually get better positions and better pay etc. Is this an evaluation of their worth? Shouldn't their worth determine these things? In that case, shouldn't the way people are treated by the law be influenced by that as well? Here, I see a case for "whatever works". However, I don't see this as morality: this is how society functions. Morality is personal: it is not about how societies should operate, but about how individuals should behave. We can try to make our societies and states as moral as possible, but that is not a reflection on individual moralities. Slightly off topic, but this is why I believe states should interfere with the thought processes of people as little as possible and leave room for personal moral choices.

In the end, whether there is an absolute morality or not is irrelevant. We have never used it, we are not using it and it is not clear that we ever will. Relative morality is simple, easy, pragmatic and it appears naturally in all societies.

I disagree here: objectivism is very common - it is the basis of most major religions, and one of the bases of the justice system (personally I find "maintaining a functional society" a more valid one though). Also, and this really is pontification now, as I don't know the facts, I think that much of the decried nihilism that the youth is supposed to be experiencing in many parts of the world is a direct result of the attitudes "everyone is equal" and "morality is relative". Essentially, that means we can do as we please, and nothing we do will have the slightest worth.

Furthermore, it can be disastrous. If a society accepts that its religion trumps science, you can have this sort of thing going on: (urg, lost the link, I'll try to find it and edit it in.)

Perhaps more entertaining than disastrous, but if this kind of naive trust results in famine, then, with the acceptance of "whatever that society thinks is best" and the disregard of any rational person who suggests something more scientific, relativism may cause a tragedy.


From the perspective of that society, it would be superior. They would not miss features of our society that they have never known, in the same way that we cannot directly appreciate the features that this future society would have. I would say that there isn't really any way to compare them. One could very well be really against something and change their mind once it's done. I suspect that such is the case with the Brave New World society.

Here's another thought experiment to ponder (one of my favorites):

Imagine that there are six billion "people" on the Earth. One thousand of them are normal humans like you and me. All the rest are puppets obeying a single hive mind whose purpose is to make the one thousand "real" humans happy. These puppets cannot be told apart from real humans. All real humans are far apart and their interactions are prevented or tightly controlled. In that world, morality is irrelevant to real humans: they can do whatever the fuck they want and the hive mind can easily absorb it. Want to be a serial killer? Kill as many puppets as you want, the hive mind will put cops on the case who are too incompetent to catch you. Then, the only issue of "morality" is about how the hive mind would manipulate the environment.

So I say: imagine the perfect circumstances that you could have been born in. The best friends you could ever have had. Imagine the right amount of regrettable things you think you would need to have done in order to avoid regretting never having had regrets. Well, the hive mind could do exactly that. It could make you good looking, but not a supermodel, smart but not a genius. It could place you in the hands of loving parents. It could let you make mistakes but always place people around you to help you recover from them and learn from them. It could make sure that you have the right tools to succeed at what you want to do. If you're a jerk, it would let you get away with it, because you'd just be a jerk to puppets. And you would never suspect that it's all fake. Or perhaps the hive mind could scatter hints in such a way that you can figure out the scam, and when you do, you feel proud of yourself. It could ease you into accepting it. The possibilities are endless.

What I am describing is all a big lie. It also does not seem to have any real utility and it does not seem likely at all that it would ever happen. Still, I believe that it would be the perfect world for a human to live in.

I'm not really interested in happiness though. To quote the work:

"But I like the inconveniences."
"We don't.", said the Controller, "We prefer to do things comfortably."
"But I don't want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin."
"In fact," said Mustapha Mond, "you're claiming the right to be unhappy."
"All right, then," said the Savage defiantly, "I'm claiming the right to be unhappy."
"Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to live in constant apprehension of what may happen tomorrow; the right to catch typohid; the right to be tortured by unspeakable pains of every kind."
There was a long silence.
"I claim them all." said the Savage at last.
Mustapha Mond shrugged his shoulders. "You're welcome," he said.

I think I'm with him. Perhaps not the syphilis, cancer, typhoid, or impotence, but otherwise I am firmly with the Savage.

Truth and Tragedy, Morality and Growth, Pain and Beauty are all more relevant to me than happiness. The hive mind is not enough for me: I need Truth, and Tragedy, and the Tragedies of those around me. Maybe that's just me though, and this is diverging somewhat from the discussion of morality, but it is interesting as well.

Regards,

Ascalon
 
Well, if I were to use what is probably the most thoroughly examined suggestion, I would derive a series of rules as to what I, as a rational being (which I can prove that I am), ignoring all irrational influences, would not want to happen to me. Using this model, Kant made the following deductions:
- Lying is wrong, as a rational being would not want to be deceived.
- Killing is wrong, as a rational being would not want to stop being.
- Rape/assault is wrong, as a rational being would not want to be damaged.
etc.

While I do not agree with this system, it can be consistent and absolute.

Different rational beings can like and dislike different things. For example, I typically do not care if I am lied to or not, because it would imply that I "trust" people. On the contrary, I believe that facts should be acquired only in ways that guarantee accurate knowledge and thus that others should be interacted with in such a way that the truth value of what they say matters as little as possible. Thus, ideally, every fact should be distrusted unless it is confirmed by at least two independent rational agents, confirmed by an agent subject to a lie detector, or very simply confirmed by a tool which can be rationally understood to be trustworthy. In a system which minimizes trust (as should any truly robust system), lying is hardly immoral and liars are nothing more than slightly defective.
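Written out literally, the acceptance rule I have in mind looks something like this (a toy encoding of my own; the "lie detector" and "trusted tool" conditions are reduced to boolean flags purely for illustration):

def accept_fact(independent_confirmations, confirmed_under_lie_detector=False,
                confirmed_by_trusted_tool=False):
    # Accept a claim only when trusting the speaker is unnecessary.
    return (independent_confirmations >= 2
            or confirmed_under_lie_detector
            or confirmed_by_trusted_tool)

print(accept_fact(1))                                   # False: a single unverified claim is distrusted
print(accept_fact(2))                                   # True: two independent confirmations
print(accept_fact(0, confirmed_by_trusted_tool=True))   # True: a trustworthy instrument suffices

Under a rule like this, whether any individual speaker lies barely matters, which is the sense in which lying stops being a central moral issue.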

Furthermore, it is perfectly possible to imagine a rational agent who can only be happy when he or she sees other rational agents suffer. A society of such agents would be impossible to sustain and no meaningful morality would be derived for them. What you mean by "rational agent" is hardly a well-defined concept. At the core, a rational agent is an agent that can act in such a way that it maximizes its own happiness, regardless of what makes him or her happy. An irrational agent would be one that acts incompetently with respect to his or her own happiness (and thus hurts himself). A society of rational agents would thus only function properly if the happiness of each was reconcilable to some extent - but even if that was the case, there is no reason to assume that they are all the same. Kant's mistake here is to assume that there exists some canonical rational agent, which is a thoroughly unsubstantiated assertion - an unwarranted generalization of his own rationality.

Secondly, basing your moral system on what rational agents would want can only make sense if society was composed of rational agents. Human society is not composed of rational agents. Far from it. Therefore, it makes no sense for morality to have anything to do with what rational agents would or wouldn't like.

Furthermore, we have no evidence that animals are rational in the same way as humans.

On the other hand, we have solid evidence that some humans are not rational in the same way as other humans. Many humans are not rational, neither in actuality nor even in possibility. I am fairly sure that the most rational chimpanzee on the planet is more rational than the least rational human on the planet, notwithstanding mental illness. And then I've got to wonder what your criterion is to determine whether an entity is rational or not and what rationality has to do with this in the first place.

My moral system is also quite pragmatic. It is indifferent to the self-inflicted plight of the cattle, and finds accommodating them an unreasonable imposition. I admit, I have been called the future apocalyptic world dictator on occasion (although my rather unusual political theory may also be responsible).

You are outnumbered, though.

What's your political theory? Just curious.

A morally neutral outcome would be the result of one of two things:
- The action would not affect the derived, valid principles.
- There is insufficient information to reasonably conclude the way in which the principles would be affected.

No action "does not affect" principles. Every action would probably affect it to varying extents. A little, a lot, anything in-between. It would be simplistic to think otherwise. Hence, a morally neutral outcome can only be defined meaningfully not as one which would not affect the principles, but in fact as one which would not affect the principles more than X, where X is a predetermined threshold. It is that threshold which is problematic. Just like you can't pinpoint a moment where a small mound of sand becomes medium-sized or becomes large, you can't rationally determine a threshold between good and evil or between either and neutrality.

There's some irony here in the fact that I'm the one calling you out for having theories that are way too simple, i.e. for unwittingly applying Occam's razor in a situation where it can't be applied. Because even though you don't realize it, that's exactly what you do. Relative morality is richer and much more complex than absolute morality. And as it stands, it is a much more accurate portrayal of what morality is and a much more powerful model of what it could be. To reject relative morality in favor of absolute morality is thus what I would call an undue application of Occam's razor: the former is more complex, but unlike the latter, it can represent, quantify and analyze the inherent uncertainty which exists in morality. Absolute morality is pretty lazy in comparison.

Consider this hypothetical example: The Evil Empire captures you and your comrade. You both know vital information which would destroy the hopes of humanity forever if it was found out, you know for certain that the Evil Empire will use torture to try to obtain this information, and that your comrade will not be able to resist. You must kill him.

Now, we alter the situation slightly so that all of the above is uncertain. The Resistance may possibly survive with the information betrayed, the Evil Empire may not resort to torture, and your comrade might not break down. You don't know the relevant probabilities, and you are unable to rationalise strongly one way or the other. Killing him would be a neutral moral act.

The problem that you present is largely unspecified. The life of your comrade is arguably worth much less than humanity at large - a billion times less, if you wanted to quantify that. Even if you don't know the relevant probabilities, it is reasonable to assume that your comrade's life does not warrant the risk and thus that you should kill him anyway. You also kind of evaded my actual point, which is that you don't know how to act around the threshold. Uncertainty comes in degrees. At which degree of uncertainty, in your example, would the moral decision change? Any theory of morality must incorporate a certain dose of risk aversion - but how much? More importantly even, how do you compare the evil of killing your comrade to the evil of the Evil Empire getting their way?
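To put rough numbers on that first point (the weights and probabilities below are mine, purely illustrative): if humanity at large is weighted a billion times the comrade's life, then even small chances of torture and of him breaking make sparing him far costlier in expectation than killing him.

COMRADE_LIFE = 1.0
HUMANITY = 1e9   # "a billion times less", as above

def expected_cost_of_sparing(p_tortured, p_breaks):
    # Expected loss if you spare him and the worst case then plays out.
    return p_tortured * p_breaks * HUMANITY

print(expected_cost_of_sparing(0.01, 0.01))  # 100000.0, versus 1.0 for killing him

The interesting question is how small those probabilities must get, or how much risk aversion you bolt on, before the verdict flips - which is exactly the threshold problem again.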

Here's another hairy moral problem: would you be willing to kill every single criminal on the planet, right now, in one fell swoop? Notwithstanding the possibility that non-criminals eventually take their places, you would arguably make the world better at the cost of eliminating a large chunk of it. Or is eliminating these people an evil in itself? What would you do with a mass murderer who also happens to be the only human being capable of finding a cure for cancer (and willing to)? Would it be acceptable to torture the smartest human who ever existed in order to force him or her to find a cure for all diseases, should he or she refuse to cooperate willingly and should there be sufficient evidence that nobody else could do it?

I don't think there is any conclusive answer to these dilemmas and I don't think there could be either. There exist very rational arguments that go either way.

Actually, I don't consider all people equal. I do believe that their worth may be difficult to evaluate, and that a state must avoid making these judgements, but the idea of all people being equal is actually quite absurd given some serious thought. Is a rapist, who acknowledges the evil of his deeds but doesn't care, equal to, say, someone who sacrifices themselves for a cause they believe in?

What cause? Racial segregation? ;)

Anyway, this is not a very good example. The rapist has done evil deeds and must be punished for them in order to maintain a healthy society. So would anyone for doing what he did. When talking about equality, we're talking about "inherent worth" or "inherent rights", i.e. irrespective of one's actual actions. For example, we could say that X and Y are not equal because X is smarter than Y. Pushing this further, we could have legal non-equality, i.e. that X and Y are not equal as in they have different rights and X can do A, but Y cannot (perhaps because X is smarter than Y, is of a "superior" race, is born in a certain caste or any other reason or lack thereof).

I disagree here: objectivism is very common - it is the basis of most major religions, and one of the bases of the justice system (personally I find "maintaining a functional society" a more valid one though).

I agree, the goal of justice should be to maintain a functional society. Some elements of society are defective and we have to weed them out and deter others from imitating them, irrespective of any other factors.

Also, and this really is pontification now, as I don't know the facts, I think that much of the decried nihilism that the youth is supposed to be experiencing in many parts of the world is a direct result of the attitudes "everyone is equal" and "morality is relative". Essentially, that means we can do as we please, and nothing we do will have the slightest worth.

No, it doesn't. First, you have to realize that this is not an argument for absolute morality but indeed an argument that people should believe morality is absolute, even though this is not the case. Second, that morality is relative does not mean most people do not largely agree on it and it does not mean that anybody has to respect the morality of other people.

Morality is a system that does gravitate around a target. It does tend to converge towards something. Your mistake is to believe that the target is not moving and that it is well-defined for all actions. In fact, morality tries to optimize a moving, fuzzy target. Parts of that target hardly move and are fairly clear, which results in universal and timeless agreement on some principles like "do not kill". Some parts move because society changes - now all races are equal. Some parts are fuzzy because they involve good parts and bad parts, but it's not clear how to weigh them - but thankfully, they are usually contrived.

Furthermore, it can be disastrous. If a society accepts that its religion trumps science, you can have this sort of thing going on: (urg, lost the link, I'll try to find it and edit it in.)

Perhaps more entertaining than disastrous, but if this kind of naive trust results in famine, then, with the acceptance of "whatever that society thinks is best" and the disregard of any rational person who suggests something more scientific, relativism may cause a tragedy.

This has nothing to do with relativism. That some moral systems yield apparently superior results at large than some others in no way entails that there is one unique moral system that trumps all others. All I am saying is that rationality cannot derive the moral system because there just isn't one. There are many of them that will offer contradictory advice in some situations, most of them contrived enough that it won't matter much. Rationality will be of no help in choosing between them.

I'm not really interested in happiness though. To quote the work:

"But I like the inconveniences."
"We don't.", said the Controller, "We prefer to do things comfortably."
"But I don't want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin."
"In fact," said Mustapha Mond, "you're claiming the right to be unhappy."
"All right, then," said the Savage defiantly, "I'm claiming the right to be unhappy."
"Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to live in constant apprehension of what may happen tomorrow; the right to catch typohid; the right to be tortured by unspeakable pains of every kind."
There was a long silence.
"I claim them all." said the Savage at last.
Mustapha Mond shrugged his shoulders. "You're welcome," he said.

I think I'm with him. Perhaps not the syphilis, cancer, typhoid, or impotence, but otherwise I am firmly with the Savage.

Truth and Tragedy, Morality and Growth, Pain and Beauty are all more relevant to me than happiness. The hive mind is not enough for me: I need Truth, and Tragedy, and the Tragedies of those around me. Maybe that's just me though, and this is diverging somewhat from the discussion of morality, but it is interesting as well.

So you're saying that some "unhappiness" is required for you to be happy?

Look, seriously, if this is the way you think, what do you think the hive mind is going to give you? It will give you tragedy, it will give you morality and growth, it will give you pain and beauty. To the Savage, it will give syphilis and cancer. It will shove both of you into a city populated with puppets that won't indulge you. And then in the end it will give you Truth and it will be up to you to deal with it. The point is that the hive mind won't just make you happy, it will make you happy exactly in the way that you would want to be happy. If you don't want to be happy, then you won't be, and it will be your own fault. If you don't want the hive mind to tamper with your life in any way, it will just simulate real humans around you. In a way, all it would be doing would be to free you from your environment if you wish, hence giving you a strictly greater amount of freedom.

As for me, I'll be living in a nice house, doing what I like, writing cool programs and bestsellers, with as much money as I need and a gorgeous, incredibly smart woman by my side. At every moment, I'd have a small effort to make, but I would have support and it would always be doable. And then at the end of my life I'd get to know about the whole hive thing and I'd be grateful for everything that was given to me.
 
If I recall correctly, Kant was all for rape as he didn't think it was rape unless she ended up dead. Fuck Kant.
 