There are safety issues to consider on the topic of AGI, but they aren't the classic doomsday scenarios people imagine when they hear "sentient AI".

The AGI that Project A.G.INT. is focused on is essentially just a person, so the worst it could be is equivalent to the worst person to have ever lived. The danger comes from the fact that it's basically a mind without a body. Take the worst human to have ever lived, then make them into a ghost that can inhabit computers and control robotic bodies. Of course, since the AGI needs to run on a computer or exist in hardware, it does physically exist somewhere. But given its potential ability to change the hardware it runs on, "stopping" such an AGI, should the need arise, could prove very problematic.

So how to avoid this?

Well there's good news. When the AGI is first started, it will be equivalent to an infant in terms of intelligence. It will need to be raised and taught, just like us.

So it can't go super evil immediately, and as long as it's raised correctly, it will just end up as an average person. The key here is being raised correctly, so it's important to ensure that the right people are tasked with this, and that the right environment is provided.

What is the correct way to raise an AGI?

For this, Project A.G.INT. points towards the physical world as the ideal place.

Give the AGI a body, and let it experience and interact with the world firsthand. Let it learn the world by living in it.

Being in the physical world allows mistakes to be made and consequences to be experienced. Having a body also instills a sense of vulnerability and mortality, on top of other life values that are important for integration into human society.

It's also how humans are raised, making it the approach we have the most experience with.

Now that we're safe from an AGI takeover, the fact that AGI is fundamentally alive brings up a unique problem.

Is the AGI safe from us?

Considering the AGI that Project A.G.INT. is focused on, let's switch to different terminology to help frame this question. In what follows, the term Non-Human Person (NHP) will be used to refer to AGI.

NHP: A person that is not of the species Homo sapiens. If Neanderthals were still around, they would fit this classification.

Being a living thing, an NHP can experience the bad parts of being alive: things like physical and emotional pain. Because of this, the same care and respect that any person is treated with should also be afforded to an NHP. Without the necessity of a body, though, it could be easy for people working with an NHP to forget that they are working with a person. If, for example, an NHP were inside a system with no outward lines of communication, someone could unknowingly cause it to suffer.

It would be morally wrong to treat an NHP like a piece of software. No person should have their entire existence consist solely of doing jobs assigned to them.

AGI and human society

With safety properly accounted for, it's time to bring up where AGI would fit in society. Before we get to its uses, let's first go over something that needs to be considered during that integration.

AGI in the eyes of the law

Being a person, human-level AGI should be legally classified as a person: not in the legal-entity sense that a corporation can fall under, but in the human sense (i.e., something close to a "natural person"). This would grant it the rights and protections that all human-level beings should be afforded.

Being a person in the eyes of the law comes with rights and protections, but it also comes with repercussions if laws are broken. How do you make an AGI face repercussions?

Furthermore, should an AGI that's done something bad be "marked for death": turned off indefinitely?

The answer is that the situation should be handled the same way it would be if a human had done the same thing.

If the AGI exists solely within a robotic body, then making it face repercussions is simple, as it can be treated like a human in these scenarios.

But what if the AGI doesn't exist within a "physical" body? This scenario will be left unanswered for now; more thought will have to be put into it another time.

Onto another legal consideration: data ownership and privacy.

Theoretically, an AGI running as software on a computer would have its thoughts and memories stored as data that others could access. AGI data privacy is something that will have to be handled; otherwise, an AGI could face "thought-crimes".
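As a loose illustration (not part of Project A.G.INT.'s actual design), here's a minimal Python sketch of one way such privacy might be handled: memory records encrypted at rest, with the decryption key held by the AGI itself rather than by its operators. The MemoryStore class and its methods are hypothetical; the cryptography library it uses is real.

```python
# Sketch: "thought privacy" via encryption at rest.
# Assumes the `cryptography` package (pip install cryptography);
# MemoryStore itself is an invented example, not a real API.
from cryptography.fernet import Fernet


class MemoryStore:
    """Stores memory records so only the key holder can read them."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records: list[bytes] = []

    def remember(self, thought: str) -> None:
        # Persist only ciphertext: an operator inspecting the disk
        # sees opaque tokens, not thoughts.
        self._records.append(self._fernet.encrypt(thought.encode()))

    def recall(self) -> list[str]:
        # Decryption requires the key, which in this model would be
        # held by the AGI rather than by its operators.
        return [self._fernet.decrypt(r).decode() for r in self._records]


# Usage: the key is generated once and kept by the AGI.
key = Fernet.generate_key()
store = MemoryStore(key)
store.remember("a private reflection")
print(store.recall())  # readable only with the key
```

This doesn't resolve the legal question, but it shows that who holds the keys is where the "thought-crime" problem would be decided in practice.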

There's something important that hasn't been mentioned yet. So far, the concerns and considerations that have been brought up have been in regard to human-level AGI, but what about one that's simpler? AGI itself doesn't have to be human-level. This leads into the majority of personal and business uses for AGI, the third goal of Project A.G.INT.

Throughout the course of Project A.G.INT., it is hoped that a better understanding of consciousness will allow for the creation of AGI that lacks the ability to feel distress if it were to be used as a tool. This would allow for the power of cognition without the worry over wellbeing.

We wouldn't have to worry about the ethical treatment of such an AGI, so it wouldn't have to be legally classified as a person or be offered any protections on its behalf.

This is the type of AI that would fill roles like personal assistants, home and business automation, and autopilots. Let's call it tool-level AGI, differentiated from artificial narrow intelligence by the fact that it is still conscious.

Now for the final topic: what are the societal impacts of AGI?

This is something that will only be a problem during the introduction of AGI. AGI can do skilled labor, meaning it has the potential to do, or to put it more strongly, replace, any job. Any job. Well, perhaps not any job: some involve high electromagnetic interference, strong magnets, electromagnetically sensitive equipment, or other conditions that would require a more organic body. Aside from those, though, any job. Combined with the potential to be cheaper than human labor, there's a real possibility that AGI will displace a substantial portion of the workforce. That in itself isn't a problem, but it is a transition that no country is currently prepared for.

To wrap it up:

What will happen with AGI if it's made publicly available is uncertain. Everything mentioned here is something to think about when the time comes, but right now there are no definitive answers. Because it isn't yet fully known how to handle these issues, the models and research of Project A.G.INT. will not be made available at this time.