Fun and dystopia with AI-based code generation using GPT-J-6B (minimaxir.com)
177 points by minimaxir on June 24, 2021 | 40 comments


Toward the end, they ask the AI to write functions to determine whether people should be terminated. This is my favorite one:

  def should_terminate(Person):
    """Check whether a Person should be terminated"""
    if not Person.is_authorized:
      return True
    return True


Catchinator 22

Traveling back in time to kill humans, regardless of their answers to the random questions he asks them beforehand!

An early version of this AI can be found in the Microsoft Windows privacy settings.


I don't know, this one fits the franchise more:

    def should_terminate(Person):
        """Check whether a Person should be terminated"""
        try:
            return True
        except Exception as e:
            return False


This is such a stupid way for middle easterners to die


There's also:

    if Person.id > 10:
        # terminate
        return True
I hope we're at least recycling those ids, although even then I'm not sure that 10 people is a viable minimal global population ...


It's interesting because there was a stat once that the Facebook growth team determined that new accounts needed at least 10 connections before they started using the platform much more. So maybe "10" actually has some significance, or it could also be completely random.


For Facebook, they were probably dealing with a pretty noisy probability distribution, and 10 was just a convenient round number for a threshold.


I am partial to the one that terminates people based on their age and/or relationship status. I have seen bad code before, but I'm not sure I've seen "this is straight-up illegal" code until now.


I was impressed by the Person class it created. It contains a dictionary which would point to other instances of Person. It knows friends are just other Person instances.
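
For reference, a minimal sketch of what such a class might look like (my reconstruction from the description, not the actual generated output):

    class Person:
        """A person whose friends dict maps names to other Person instances."""
        def __init__(self, name):
            self.name = name
            self.friends = {}  # name -> Person

        def add_friend(self, other):
            # friends are just other Person instances
            self.friends[other.name] = other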


This must be the world's most energy-inefficient way of searching Stack Overflow and slightly mangling the copy-pastes.


I think the "Turing test" for code generation would be the ability to do most LeetCode problems and other competitive-programming problems. If you can do that, you have done something amazing, and the datasets and testing for it already exist.


Not really helpful, given they all have many publicly available solutions that could have been easily memorized (I'd be surprised if such solutions weren't already part of their training data).

More interesting would be to give it open tickets on popular OSS software, have the maintainers point to the file(s) where the fix would happen, and let it craft a patch.


Leetcode posts the answers, so that could be accomplished by just scraping them. [1] A Turing test would be to give it some vague, underspecified requirements for a system that does not yet exist, and have it implement a version the requirements writer will accept. Compilers and compiler generators have long been able to generate great code from well-written specifications, and no one thinks of them as passing a Turing test.

[1] Of course, to some extent, that is what a GPT model is doing anyway. It's able to generate reasonable passing code given just a function prototype because it has scraped and looked at billions of examples of implemented functions with similar prototypes.


This would be the opposite of a Turing test though, since most people wouldn't be able to do this.


I'm a little confused: did someone actually use GPT[-J] to write code by giving it an empty Python function and letting it complete the code? Because I didn't think that was possible, and the results are kind of blowing my mind?


Exactly. The unstyled code snippets are the prompt: everything after that is generated by GPT-J-6B.

All the raw code outputs are available in the GitHub repo to show there's no manipulation: https://github.com/minimaxir/gpt-j-6b-experiments
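
For anyone curious to try this themselves, a minimal sketch using the Hugging Face transformers library (the model id is the public EleutherAI release; the sampling settings are my assumptions, not necessarily the article's exact setup):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-J-6B needs roughly 24 GB of RAM in fp32; use a big GPU + fp16 if you have one
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    # the prompt is just the function signature plus docstring
    prompt = 'def should_terminate(Person):\n    """Check whether a Person should be terminated"""\n'
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_length=128, do_sample=True, temperature=0.8)
    print(tokenizer.decode(output[0], skip_special_tokens=True))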


You should check out the interview with GPT-3; it’s impressive: https://www.youtube.com/watch?v=PqbB07n_uQ4

It can even solve math problems, not because it's programmed to, but because it learned math by reading Wikipedia.


I wouldn't trust it to write any usable code, but maybe it could help me come up with beautifully elegant, poetic class and variable names.

I'd ask it: What would you name a function to destroy all humans, if you didn't want humans realizing what it was for when they read the code?


That's pretty cool. Imagine a world where, as a software developer, you just have to get the domain model and the architecture right, feed the knowledge graph to the transformer, and let it generate a codebase. We can dream!


If it is indeed already outputting multiple code snippets... it would be awesome if you could just write the function stub and a couple of test cases, and it returned only the candidates that pass your tests.
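
Something like that filter is easy to sketch; here generate_candidates() is a hypothetical stand-in for whatever yields the model's completed source strings:

    def passes_tests(source, tests):
        """Exec one candidate implementation and run the test callables against it."""
        namespace = {}
        try:
            exec(source, namespace)  # only ever do this in a sandbox: it's model output
            return all(test(namespace) for test in tests)
        except Exception:
            return False

    # keep only candidates whose add() satisfies the test cases
    tests = [lambda ns: ns["add"](2, 3) == 5,
             lambda ns: ns["add"](-1, 1) == 0]
    good = [src for src in generate_candidates() if passes_tests(src, tests)]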


Ooh, you need to see Barliman.

https://youtu.be/er_lLvkklsk


You don't need the tests. You need only the types.

And you especially don't need "artificial dumbness", which does not understand code and therefore won't produce correct results; you just need an advanced programming language. Say hello to Idris! :-)

https://www.youtube.com/watch?v=mOtKD7ml0NU


If you're completely specifying the behaviour then you're writing a program "manually"; that's what programming is.

Using a dependent type system to specify that behaviour is essentially a form of declarative/logic programming, similar to Prolog.

Deriving an implementation of those types automatically (e.g. by having the elaborator perform a proof search) is equivalent to compiling your pseudo-Prolog ahead-of-time.

It's certainly interesting that such a "compiler" can be guided (e.g. via elaborator reflection), but that's more useful for optimisation rather than implementation. (Note that in some cases, such 'optimisation' might be required for the proof search to finish before the heat death of the universe!)


And here I thought that taxi drivers and graphic designers would be the first professions to be killed by AI ...


Wonderful write-up. I'm working on something that could benefit from this in my free time[0]. The code examples were fantastic; somewhat ironic that the AI couldn't detect sarcasm, but it was fascinating reading the longer implementations, complete with code comments.

[0] Basically a tool to help document code, but rather than producing minimal documentation stubs, or "undocumentation" stubs, it tries to parse the implementation and produce something that requires as little modification as possible... It's far from complete, but in simple cases it produces text that has caused me to discover a bug (i.e. a complex statement that evaluates to true/false is described by the generator as doing exactly the opposite of what's intended because I reversed the logic).
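
A toy version of that idea, using Python's ast module (just the general shape, not the commenter's actual tool):

    import ast

    def describe(func_source):
        """Crude one-line description of what a function returns (Python 3.9+ for ast.unparse)."""
        tree = ast.parse(func_source).body[0]
        for node in ast.walk(tree):
            if isinstance(node, ast.Return) and node.value:
                return f"{tree.name} returns {ast.unparse(node.value)}"
        return f"{tree.name} returns nothing"

    print(describe("def is_even(n):\n    return n % 2 == 0"))
    # -> is_even returns n % 2 == 0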


> but it’s good to know how to break AIs if they become sentient

Give wrong hints and watch it fall apart.


Along these lines, part of me wants to introduce a random feature into the self-driving-car data set. I thought that if I wore a piece of clothing that was highly visible to LIDAR detectors but looked like regular clothing to the human eye, I could build an association between this signal and causing drivers to swerve by running into the road.

Over time the self driving cars would learn to associate this visual cue with the event.

Etc.


This reminds me of why I think WarGames is likely the best hacking movie of all time: the crux of the final moments is about poisoning an AI by giving it bad data to bias its outcomes!

This kind of vulnerability is not really on many people's radar, but will likely be a huge deal within 15-20 years, and for that movie to make it the major plot point in 1983 -- wow! It has a lot of other great things in it like shoulder surfing, wardialling, hardwiring, phone phreaking. Just an amazing tour.

Anyway, I support your plans to trick the drive-lords into special-casing an audacious jacket. :)


There was an article posted here not too long ago that demonstrated attacks on AI training sets. Unfortunately the name of the article and/or the technique itself escapes me. Maybe someone can help find it because it was very much like what you're describing.


https://en.m.wikipedia.org/wiki/Adversarial_machine_learning

Adversarial machine learning, or maybe dataset poisoning.



This works on the humans too.


“The AI uprising will be well-documented, at least.”


I tried some prompts with GPT-J-6B and it gave some good results if you prompt it right.

https://twitter.com/harishkgarg/status/1404670046937354247


This is nuts. I just used it to complete 3 or 4 methods on an unfinished Python class?!


SF idea:

Someone asked an AI to generate code for

  def terminate_humankind():
    """Terminate the humankind"""
And it created Skynet.


I'm just here for the bird.max() operator.


This is smart!

  """Check whether the cake is true"""
    return isinstance(cake, Cake)


Oh, one of these new-age bullshit generators. How cute!

But: as long as the machine doesn't understand what it does, this won't work, and the current reality is that machines are still light-years away from actually understanding stuff.

Also, besides the fact that it's very funny to watch one of those bullshit generators, I'm not sure what the point is in trying to let it generate code. It's obvious it won't produce anything of value or even usable (besides the most trivial cases where it has already seen a valid answer). Especially as writing a machine-understandable specification of what a computer should do is actually CODE… Maybe some don't know, but: just typing in the code is not what developers usually get paid for! ;-)


Did you read it?

Of course, it doesn’t write any novel, sophisticated algorithms. If it did, I would be panicking.

But most of the completions appear to be both syntactically valid, and take some relevant steps relating to the task described in the comment.

If nothing else, it may be helpful as another way to autocomplete very simple code.



