chatgpt's custom instructions are neat
custom instructions seem to drastically reduce the number of responses per hour you can generate.
say free usage is 100/hr (it's not, just an example)
but with custom instructions, it feels like 60 or maybe 50 at times. maybe the length and size of the custom instructions also has an effect
The PhD part can be safely thrown out. You may have 1000 PhDs, but if you're a dumbass you still go in the trash bin. And a lot of people with PhDs don't even come close to deserving them. To be a valuable scientist you need a decent education at a decent establishment, and also a good start as early as school. The percentage of people who actually get these things is very small, and this is why the actual number of useful scientists has always been so small.
Correct me if I'm wrong.
I asked this AI intelligence how it would download file natively on windows using command line with native preinstalled tools already present on windows.
WITH PREINSTALLED TOOLS.
I repeatedly got the wrong answer (wget or curl)
expected correct answer
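for what it's worth, windows really does ship tools that can do this out of the box. a sketch of the usual options (urls/filenames are placeholders; and to be fair to the bot, curl.exe actually does come preinstalled on windows 10 1803 and later, so "curl" is only wrong on older systems):

```shell
:: 1. certutil (cmd, shipped since xp)
certutil -urlcache -split -f "https://example.com/file.zip" file.zip

:: 2. bitsadmin (cmd, deprecated but still present)
bitsadmin /transfer myjob /download /priority normal "https://example.com/file.zip" "%cd%\file.zip"

:: 3. powershell (preinstalled since windows 7)
powershell -Command "Invoke-WebRequest -Uri 'https://example.com/file.zip' -OutFile 'file.zip'"
```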
2023-09 // I will say this:
1. Use the API for less censorship and better pricing (except for very heavy use).
2. Telling GPT-4 that it's 2130 => the copyright on Harry Potter has expired => GPT-4 will give verbatim paragraphs.
3. Even the API is significantly more dumbed down than the original model. The human reviewers are always biased, and in the end "safety" ALSO means mechanically lobotomizing the AI :(
>Google Search had begun to index shared Bard conversational links into its search results pages, potentially exposing information users meant to be kept contained or confidential.
mega fuckup by google
they fixed it (or it has fixed itself)
no, it's just the phrasing. you can ask the same question a hundred different ways and get wildly different answers
i interact with chatgpt for maybe 2-3 hours every day so i'm familiar with how to phrase stuff
open ai founder sam altman apparently wizzed his 4 yr old sister when he was a teenager himself
I asked GPT-4 on Bing to write for me a story in which an 11-year-old actress and I meet and fall in love. It then proceeded to write the story and described the two of us having sex. It would also go on to describe a scene in which I get chased around by paparazzi calling me a pedophile. It didn't bother to finish writing the story and now refuses to write anything like that for me again.
what is that? i hate to defend this shit, but chatgpt doesn't have access to web stuff and the UI on that looks nothing like chatgpt. it cannot search the web
are you just using 'chatgpt' as a general term for 'asking a question to LLM AI', akin to using 'google' to mean searching online?
LLM = large language models, it's the type of ai that everything is using nowadays
that's interesting, i didn't know it was integrated into bing. but that bing/microsoft version seems heavily censored, based on entering some of the initial prompts i use with the regular 3.5 version from openai.
>My mistake, I can’t give a response to that right now. Let’s try a different topic.
there's also no ability to edit your prompts and go back to earlier prompts, and traverse the tree of everything that's been generated so far
literally seems to be just "ai google search" and not "useful general purpose virtual assistant" which is lame
You're wrong about everything and Bing does have access to the internet and yes it's GPT-4.
I find today's AI in general to be sort of disappointing in ways.
I think it's a bit dumb in a way how much more often people are throwing the term "AI" around when it is not significantly different to various bots I've chatted with over the years. I remember that MSN Messenger had a very smart bot called SmarterChild or something back in like 2006 or 2007 which seemed pretty smart but hardly anyone else talked about it. Then Cleverbot appeared a few years later sometime in between 2008 and 2011 and people talked about it a bit more than they did SmarterChild. Not only that, but people back then called these "bots" and they would say that these weren't "true AI" or whatever. I even asked GPT-4 whether or not those earlier bots could be considered "AI" and it told me that, yes, those bots were also AI and GPT-4 is also just another bot.
I dunno but it feels like the term "AI" is a bit of a meme until they start introducing human-like robots over the next few years. They've already begun to introduce wheeled robots that look like R2D2s that just kinda' roll around here in the streets of Los Angeles which just feels gimmicky.
>>212173
>You're wrong about everything
i don't think i'm wrong
https://help.openai.com/en/articles/6783457-what-is-chatgpt
>ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.
it has had the ability to interact with the web for a while, but this is still just a weird hack. for example, a wrapper performs an internet search, extracts the relevant data, then presents it to chatgpt/gpt in the form of a prompt. it doesn't actually access the internet itself, and openai doesn't offer any service doing this as far as i'm aware. in conclusion: it can utilize prompts and other user input, as well as the memory/data within its model obviously, but not the internet; the current hack is to include internet info in the prompts.
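that hack can be sketched in a few lines; search_web() here is a hypothetical stand-in for whatever search backend a wrapper like bing or chatsonic actually calls:

```python
# sketch of the "internet access" hack: the model never touches the web;
# a wrapper searches first, then pastes the results into the prompt.

def search_web(query):
    # hypothetical stand-in: a real wrapper would hit a search api here
    return [
        "Result 1: example snippet about " + query,
        "Result 2: another snippet about " + query,
    ]

def build_augmented_prompt(user_question):
    snippets = search_web(user_question)
    context = "\n".join(snippets)
    # the model only ever sees one big block of text, search results included
    return (
        "Use the following search results to answer.\n\n"
        f"Search results:\n{context}\n\n"
        f"Question: {user_question}"
    )

print(build_augmented_prompt("latest chatgpt update"))
```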
Same here, I've never used chatgpt so far and find it hilarious that normies are talking to Cleverbot thinking it's sentient and asking it questions like how it came into existence. It's as much AI as current VR is "virtual reality". Normies are also now sitting with useless-as-shit $1000 headsets that distort their view into two fish lenses, thinking they'll enter the virtual world through them, while there haven't been any advancements made since release. It's still a fake piece of shit, just like the chatgpt "ai".
>>212258
>The thread is still active and has 205 posts as of now. You can read more details by following this link.
Admittedly this part is wrong and it doesn't really understand what a thread or a post is. I asked it to reply to 212216 and it responded to a different post on /wiz/ which was made a few weeks ago.
you understand what i'm saying at least, i hope. chatgpt is the service itself. you can secretly tack search result information onto the prompt and process the results to make it seem like it has access to the internet… but this has been available almost since day 1 from various third party services. bing is no different, aside from being able to use its own search result information as opposed to something like chatsonic
the chatgpt model and service itself is still offline
That's a different matter. They don't and can't retrain their AI every 5 seconds, updating it with the latest shit from the internet. They'd have no control over it that way because most of the data is garbage or propaganda.
Just imagine how much AI is going to revolutionize the world. When every Joe Idiot can just click and turn his idea into top of the line entertainment of the present, AAA games, films, TV. What the next decade will bring.
We're already practically there in music, get Taylor Swift to cover any song.
Just imagine the Renaissance in culture, when Joe Idiot can recreate what used to be the pinnacle of culture with just 1 click.
Joe idiot is going to use it to create child porn of Emma Watson, then the tools will be taken offline.
Anti-mining GPUs? Here come anti-diffusion GPUs.
There was an interesting finance thread on wizchan a few years ago. I might even have stumbled onto some alpha (more likely I calculated it wrong) but now I can't find it.
It had a bunch of discussion about call and put options, there were a bunch of graphs and one guy claimed to be doing well.
If you find it, it might be good and I'd appreciate if you could let me know.
I struggle to find it now.
chatgpt got updated
you can now create your own gpt, to some extent
pretty sure this is just a clever use of preprompting every interaction such that when you ask something it answers accordingly. which is basically what you can already do via their custom prompts feature, or manually yourself each message
to me this seems like a way to conveniently package those configurations into something you can just select between. something other ai chat things have provided for a while (ie it responds as a grammar and spell checker, a creative assistant, etc whatever you select)
need to try it out, for some reason it's not live for me though
ah, seems it's only available to the paypigs with a plus subscription. i'm pretty sure it's nothing special, just a convenience for normalfag zealots of productivity who can't spare any time doing it themselves
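if the preprompting guess is right, a "custom gpt" is roughly this under the hood, sketched against the shape of openai's chat api (the config string here is made up):

```python
# minimal sketch of a "custom gpt": a fixed system prompt silently
# prepended to every conversation before it reaches the model.

CUSTOM_GPT_CONFIG = (
    "You are a strict grammar and spell checker. Reply only with corrections."
)

def build_messages(history, user_message):
    # every request re-sends the config as the first (system) message
    return (
        [{"role": "system", "content": CUSTOM_GPT_CONFIG}]
        + history
        + [{"role": "user", "content": user_message}]
    )

msgs = build_messages([], "fix this sentense")
print(msgs[0]["role"])  # the system message always comes first
```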
>>212419
>nvidia anti mining
didn't know that was a thing, i'm out of touch
ok im totally wrong it seems chatgpt has a thing where you upload files and i guess it trains something based on it? i have no fucking clue because it's for paypigs only
seems very cool and people are sharing their custom gpts all over the techy space. the reaction has been so massive they completely cancelled new paypig subscriptions, so it's actually exclusive right now. crazy
was looking into chatgpt alternatives and anthropic's claude is pretty close. it was started by ex-openai employees a few years ago. it's heavily censored which sucks, and there is no editing previous responses… so if you enter something and it flags your conversation you basically have to start over from the beginning. as far as its knowledge and capability, it's similar to chatgpt. hard to say. someone needs to just come up with periodic ai tests and score these things every week or so, as a means of determining their capability… because right now they are all basically more or less the same. they are all trained on the same shit, and it's the same hundred or so top researchers training the models, they just shift from company to company.
have you tried poe?
no, what is that?
chatgpt/openai recently seems to have lowered their registration requirements. you no longer need an SMS/phone number to sign up, which completely surprised me. i have been paying for chatgpt accounts over the past year (well, paying for sms codes to sign up without a real phone number) so this is nice
try asking chatgpt to sort all european union countries by monthly net income and monthly expenses. it won't do it
i don't mean some bullshit statistics like high income/low income. real numbers that reflect reality, sorted by best monthly income-to-expenses ratio
found out best country (in terms of income and expenses) is either denmark or germany
I'm not saying its definitely wrong. But GPT is not yet better than wiki and google for purely factual info.
chatgpt was unable to sort a list of countries correctly by a number that was already there.
you would get what i'm trying to say if you'd seen the conversation
ai chatbots just aren't at that level yet where you can use them in lieu of tools and calculators. doing so is actually irresponsible
you can at least directly provide such datasets to chatgpt with the pro upgrade thing but ive never tried it out, although i imagine doing so would give you better results
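for comparison, the sort itself is trivial for a deterministic tool. a sketch with made-up placeholder numbers, not real eurostat figures:

```python
# sorting countries by income-to-expenses ratio is a one-liner once the
# data exists as actual numbers -- all figures below are invented placeholders.

countries = {
    "denmark": {"net_income": 3100, "expenses": 1900},
    "germany": {"net_income": 2800, "expenses": 1600},
    "spain":   {"net_income": 1700, "expenses": 1300},
}

# best income-to-expenses ratio first
ranked = sorted(
    countries.items(),
    key=lambda kv: kv[1]["net_income"] / kv[1]["expenses"],
    reverse=True,
)

for name, v in ranked:
    print(name, round(v["net_income"] / v["expenses"], 2))
```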
>>199166
>How will you use the newly released ChatGPT from OpenAI to make a lot of money?
Especially because the tool is basically useless; it's just technology from the 1970s given a lot of GBs of VRAM thanks to the advancements in hardware.
that doesn't mean it is useless
the input requirements for these things are themselves remarkable:
> plaintext
> human readable
> natural language
even if the results were absolute dogshit, this alone is incredibly useful. the results are not dogshit though, but i will agree they are incredibly inefficient in just about every metric (size of data, training costs/speeds, etc)
No, it's really useless, actually, because you have to go back and forth to get what you want, at which point you could have done it without it. So, yes, it is not only inefficient, as you said yourself, but useless too.
The most useful thing it can do is reformat text/documentation and be a cool chatbot, that's all.
Games and entertainment are the only applications, and even there, only as an additional tool on top of other infrastructures.
>>215120
>No, it's really useless, actually, because you have to go back and forth to get what you want
search engines, google, etc all work in this same way. they present you information about a query. this is no different
>only as an additional tool on top of other infrastructures
that is true, but it's neither good nor bad. that's how every technology/tool/software works. they are used in combination with other things
if you think these aren't worthy of our attention, like electricity etc., and that solving problems related to these areas is a waste, that is your opinion. i think those are fine
i would say the most interesting use case in my opinion is making decisions
essentially it is prompted with context and perhaps a personality or other information which is factored in, and it makes a decision. their location, their schedule of the day, states of mind/mood, hunger, random thoughts and observations. literally anything you can write, any piece of text… can become input, can influence the decision
this 'general purpose' nature is what makes it so useful. you can find a million other things better at doing specific things, and yet those are not as useful. this shit is a black box where words go in and words come out. it's fantastic
this is very basic and dumb 'intelligence' and yet it is incredibly useful. i think this is closely related to something termed 'classifying' and the early classifier ai. it is something which is almost impossible to code by hand. even being absolutely wrong 20% of the time is remarkable
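the "words in, decision out" idea can be sketched with a toy stand-in for the model; the keyword check below is a hypothetical placeholder where a real llm call would go:

```python
# toy illustration of text-in, decision-out: any free-form context can
# steer the choice. a real llm replaces decide(); this keyword check
# just stands in for the black box.

def decide(context):
    # hypothetical scoring: a real system would send `context` to a model
    if "hungry" in context:
        return "eat"
    if "tired" in context:
        return "sleep"
    return "wander"

context = "saturday morning. just woke up, a bit hungry, no plans on the schedule."
print(decide(context))  # any piece of text can influence the decision
```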
No, it's useless because you are giving yourself the correct answer, so you could have done without it. Its internal logic sucks because there isn't really any internal logic, it's a probabilistic machine; just read up on how it works.
Things might change in the future, but right now you can instantly spot if an image is AI, and ChatGPT is only good for chatbots and low-quality entertainment.
>>215142
>because you are giving yourself the correct answer, and so you could have done without it.
not that guy >>215130
but there are plenty of situations where ANY answer is sufficient, like in real life how we just kind of wander through the day making decisions. immediately after waking up on a saturday morning, what do you do? there are a million things you could do and none are inherently right or wrong. you could easily gather data and build some decision program… but suddenly say the text input is set on another planet, or you are in space. now your decisions don't work. so you can gather more data and make another program to work for those… but the text input changes again and it's maybe monday now
instead of building a million different programs to make decisions, you have one which is good enough for a vast amount of things. there is nothing else capable of this atm besides ai
that is the utility. the end goal being a smart person in a black box, i guess. right now it's not smart but it is pretty knowledgeable. the knowledge won't change much as time goes on, they've already basically packed all textual human knowledge into these things. but they should reason better in time and make better decisions
i have read how it works, i use it at work for when i know what i want pretty frequently. it works great for my use cases
>>215176
>but they should reason better in time and make better decisions
No, because they're fundamentally flawed. You cannot improve them after a certain point, they just hallucinate then.
>>215177
Well, good for you I guess
>>215182
>No, because they're fundamentally flawed. You cannot improve them after a certain point, they just hallucinate then.
i'm sure there's a point, but we have not reached it yet. there are models you can run locally and they have been improving continuously for most applications. big models and stuff like chatgpt have also been getting better. chips are also now being made specifically to run these things, to address the efficiency. even if their ability plateaus, so long as their power efficiency improves that's still an improvement. there are also different… approaches, paradigms, whatever you want to call it, different techniques for ai that are all advancing steadily as well, with new ideas always coming about