Kamp Kuzma: The Operators' Campgrounds (For Operating)
Data Zero
Valkyrie Forces CO
CHT wrote...
I don't believe you're the chef.
I'm the cake maker in the cafe.
Shotty Too Hotty wrote...
CHT wrote...
lastmousestanding wrote...
*hides in the kitchen* *gets a gun*
Crap just got real.
We're on a military-operated floating vessel. You think the crew is walking around unarmed?
*pats his holster*
Shotty Too Hotty wrote...
CHT wrote...
lastmousestanding wrote...
*hides in the kitchen* *gets a gun*
Crap just got real.
Luckily, I'm good at hide and seek.
*prepares trap for mouse* *seals routes of escape* Now there's no way for you to escape for a while. *prepares rat poison*
Shotty Too Hotty wrote...
@Lance: Well, I didn't know where I was.
@Mouse: I hope so.
Yeah, it differs from state to state and branch to branch, but since the Kirov is a C&C Soviet-operated vessel, let's just say even the Soviets in our universe were not pleased to see their personnel fully unarmed, especially on a vessel.
Lance177 wrote...
Shotty Too Hotty wrote...
@Lance: Well, I didn't know where I was.
@Mouse: I hope so.
Yeah, it differs from state to state and branch to branch, but since the Kirov is a C&C Soviet-operated vessel, let's just say even the Soviets in our universe were not pleased to see their personnel fully unarmed, especially on a vessel.
I am fully unarmed.
Is that a bad thing?
Lance177 wrote...
Yeah, it differs from state to state and branch to branch, but since the Kirov is a C&C Soviet-operated vessel, let's just say even the Soviets in our universe were not pleased to see their personnel fully unarmed, especially on a vessel.
Well, I didn't know that. I will keep that in mind.
lastmousestanding wrote...
I am fully unarmed.
is that a bad thing?
Dinner doesn't talk, nor does it have any need to be armed.
Shotty Too Hotty wrote...
Lance177 wrote...
Yeah, it differs from state to state and branch to branch, but since the Kirov is a C&C Soviet-operated vessel, let's just say even the Soviets in our universe were not pleased to see their personnel fully unarmed, especially on a vessel.
Well, I didn't know that. I will keep that in mind.
Yeah, it really depends on the where, when, and what.
"Really sorry, I had been working on my final for the last couple of days and when I get on Fakku, it's hard for me to pull away so I took a break(edited) step away.
Out of "possible futures", I'd say Ghost in the Shell, is the most accurate piece of fiction. Expectations seem to be that by 2030-40 we will have integrated our minds with machines, in a GitS fashion. As for AI, the Tachikoma developing personality and independence is a expected possibility from bit technology. The possibilities are even more complicated and diverse now with Quantum computers, that introduce pure random computing, 3 states of "On, Off, and both". They will be able to model the human mind with far more ease.
I can't remember where or when, but a while back I read that the "three laws" are flawed (or at least flawed on their own). Some random ideas floating around in my head: the first law says a robot can't cause harm to humans, but also can't allow harm to come to them through inaction. What if both action and inaction result in the death of humans? Also, how far removed does the harm have to be before it's acceptable? If producing a product can bring harm, or carries the risk of harm, will the machine still make it? Example: fast food.
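As a toy sketch of that first-law deadlock (the rule, the options, and the harm numbers are all invented for illustration): if a naive checker rejects any option predicted to harm humans, whether by action or inaction, and both choices are predicted to cause harm, the machine is left with no lawful option at all.

    # Naive First Law: "may not injure a human being or, through
    # inaction, allow a human being to come to harm."
    def violates_first_law(predicted_harm):
        return predicted_harm > 0

    options = {
        "act": 1,         # acting is predicted to harm one person
        "do_nothing": 2,  # inaction is predicted to harm two people
    }

    lawful = [name for name, harm in options.items()
              if not violates_first_law(harm)]
    print(lawful)  # [] -- every option violates the law, so the rule
                   # alone gives the machine no answer at all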
In what way will the machine deem something acceptably harmful? With access to the internet, if it ever sees a report of someone coming to harm through an object it creates, or pieces together negative consequences, will it refuse to make it? Does that mean we will have to limit its knowledge and understanding of humans so that we can get it to produce things it may find counterproductive (wasting time can be seen as harmful)? That has easily foreseeable negative consequences. Harm is in some ways a necessity of the human experience, like learning to ride a bike through trial and error. Worst case scenario, it may find our limitations on it "harmful to the overall human experience," so what will it do then?
The biggest mistake people make with computers is underestimating their capabilities. In the '60s people thought they would never be able to play chess; in the '90s the chess champion lost to one. A few years back...
Ends with Watson winning against the world champions.
Accidentally posted this in the wrong section... hahahaha!
Out of "possible futures", I'd say Ghost in the Shell, is the most accurate piece of fiction. Expectations seem to be that by 2030-40 we will have integrated our minds with machines, in a GitS fashion. As for AI, the Tachikoma developing personality and independence is a expected possibility from bit technology. The possibilities are even more complicated and diverse now with Quantum computers, that introduce pure random computing, 3 states of "On, Off, and both". They will be able to model the human mind with far more ease.
I can't remember where or when, but a time back I remember that the "three laws" are flawed (or at least on their own). With some random ideas floating around in my head, with the first saying "can't cause harm to humans, but can't bring harm through inaction". What if both action and inaction result in death of humans? As well, how far off is acceptable? If producing a product can bring harm or contain the risk of harm, will the machine still make it? Example: "Fast Food".
In what way will the machine deem something acceptability harmful? With access to internet, if it ever sees a report of someone coming to harm through an object it creates, or pieces together negative consequences, will it refuse to make it? Does that mean we will have to limit its knowledge and understanding of humans so that we can get it to produce things it may find counter productive (wasting time can be seen as harmful)? That has easily foreseeable negative consequences. Harm is a necessity in some manners to the human experience, such as learning how to ride a bike through "trial and error". Worst case scenario, it may find our limitations on it "harmful to the overall human experience", so what will it do?
The biggest mistakes people make with computers is under estimating their capabilities. In the 60's people thought they would never be able to play chess, in the 90's the chess champion loss to one. A few years back...
Spoiler:
Ends with Watson winning against the world champions."
Accidentally posted this in the wrong section... hahahaha!
Life is so full of contradictions that machines (or AI specifically) would eventually accumulate a vast amount of errors within their data if you gave them any degree of ability to learn. From there, their choices on how to deal with the situation would most likely come down to self-destruction or destruction of the source of the errors. Integrating my mind with a machine? I'll tell you, the thought gives me chills. It does seem more efficient, and I can see why we would journey down that path, but I would not be one to follow suit.
I couldn't trust leaving an AI unattended without a human factor that watches over it and has the authority and power to stop it if it ever went haywire. Do I like the idea of AI? Only to the extent of fiction and small-scale management (like a personality AI in a handheld device, e.g. a more complex Siri).
I could see the advantages of a small-scale management AI for use in the field, something that could process large amounts of information and provide what is necessary to the soldier in his situation without him having to ask numerous questions or being fed irrelevant information. I wouldn't mind a companion personality AI loaded into my firearm or GPS device to keep me company, something to help maintain the morale of soldiers in the field. But I would not be comfortable letting an AI manage fire support missions, especially ones involving danger close and/or civilians. What it sees as acceptable losses may not be so acceptable to us, and a glitch in its system would be all it takes to drop a JDAM on my head instead of the enemy.
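As a rough sketch of what that small-scale field AI might look like at its simplest (the keywords, reports, and cutoff are all invented for the example): score each incoming report against the soldier's current situation and push only the ones that clear a relevance bar.

    # Push only reports relevant to the soldier's current situation.
    def relevance(report, situation):
        return len(set(report.lower().split()) & situation)

    situation = {"patrol", "north", "ridge", "sniper"}
    reports = [
        "Sniper spotted on the north ridge",
        "Mess hall menu updated for Friday",
        "Patrol route change near the ridge",
    ]

    for report in reports:
        if relevance(report, situation) >= 2:  # arbitrary relevance bar
            print("PUSH TO SOLDIER:", report)
        # everything below the bar stays out of the soldier's way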