
I Determine How Much AI Understands About Gold Detecting



I have no problem with people thinking AI won’t work right. 🤔

I've heard these same arguments before about other tech we all use readily now.

Keep it coming jasong, very interesting!



I was doing a search last night for auto parts and I ended up on Bing.  Bing said it is now AI assisted.  It still gives you results with sponsors first.  What we don't know is what determines the rest of the rankings and how you get your page moved up to the top of the AI sites.

I once worked for a tea company and we paid a firm in India to get our pages ranked higher.

I wonder what one of the AIs would say the highest 'rated' or 'ranking' websites for metal detecting are.

It does seem clear from the questions asked so far that some of the careful wording and follow-up queries needed with a basic search engine aren't necessary with an AI.  You could still be missing results by default, as we know from Google.


Yeah, AI is definitely already here and in heavy use in other aspects of our lives. It's interesting to understand what's under the hood a bit, just for general purposes, to understand everything around us better too. 

After messing with this stuff, I'm definitely impressed. But I can definitely see it's more "qualitative" than "quantitative" right now. In other words, it has massive power for things humans can't do, but it stumbles on things humans can easily do. The more abstract the task you give it, the more it needs a human to interpret the results, because there are often errors. The errors it makes are often in the simplest-sounding stuff, but these things just prove intensely difficult for neural nets.

One example, from that Altman interview, is that they struggle with basic constraints like providing summaries of two political candidates that are of equal length. The summary is no problem, providing an opinion on the candidates is no problem, but the equal-length part, of all things, is hard for it. Neural nets (AI) don't work the same way as traditional programs on a computer. Another example is the way every graphical AI out there struggles with text generation (yet GPT clearly has no problem reading text, go figure!). Here is a quick example: I wanted an AI to make a logo for a mining company, so I asked:

ME: draw some simple vector based logos for a mining company, in the style of Rob Janoff, with an art deco influence, using no more than 2 colors

[Attached image: the AI-generated logo attempts, with garbled text and more than the requested two colors]

This is a visual example of how they struggle with some basic things (like text, or counting colors) that take a human little time to do, but excel at very complex things like diverse graphic design from exact starting criteria. You can't just have the AI do everything; a human would still need to recognize the text errors, then go into Photoshop, match the colors and text style, and recreate the correct text.
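
For what it's worth, if you wanted to drive this kind of image generation from a script instead of a chat box, a minimal sketch against OpenAI's image endpoint might look like the following. To be clear, the model name, image size, and output handling here are my assumptions for illustration, not what produced the image above:

```python
# Minimal sketch: requesting logo concepts from an image-generation API.
# Assumes the OpenAI Python SDK (v1.x) and an API key in OPENAI_API_KEY;
# the model name and size are illustrative choices, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Simple vector-style logo for a mining company, in the style of Rob Janoff, "
    "with an art deco influence, using no more than 2 colors"
)

result = client.images.generate(
    model="dall-e-3",   # assumed model; any image model your account can access
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# The API returns a URL (or base64 data) for each generated image.
print(result.data[0].url)
```

Scripting it doesn't fix the garbled text problem, of course; it just makes it easier to generate lots of candidates for a human to fix up afterwards.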

 

8 hours ago, Northeast said:

I wonder what it is like with LIDAR?  🤔   LIDAR is what I would love to get access to here in Victoria (Australia).  

It just doesn't seem publicly available yet though.  

Actually, last night I tried to get it onto LIDAR. But it made me realize I'm not entirely sure what it's doing. ChatGPT can't, but GPT3.5 can - or says it can. The problem is that getting the inputs in and the results back out is very difficult; you have to figure out ways of questioning it that produce output you can actually access. And these GPTs being restricted from certain capabilities, like accessing programming APIs (this is a public safety issue), means that getting the outputs can be difficult or impossible, so it's hard to verify whether the GPT is actually processing LIDAR or not. And yes - they will "lie".  🙂 More accurately, they give information without seeing that it's wrong. ChatGPT has measures to correct this; GPT3.5 has far fewer.

So anyways yeah, last night I tried to get GPT3.5 to analyze known meteorite impact sites, then analyze a given area for any structures that might resemble meteorite impact craters using aerial imagery and LIDAR data.

The results were....disheartening. Most results did not exhibit visual signs of a potential impact structure.

If not for a few perhaps oddly coincidental results (see below), I think its results are more or less noise. And I can't specifically determine whether it's actually analyzing the aerials and LIDAR. I'm actually leaning a bit towards it lying to me...

 

[Attached image: aerial view with a red pin at the coordinates GPT returned, offset from the potential structure]

This is the only somewhat sensible result it gave me. Not a meteorite impact structure IMO, and likely just a lucky guess based on something other than aerial/LIDAR analysis. Notice the actual GPS coords (at the red pin) are not even on the potential structure itself.

I asked it because, in fact, I know there are some subtle, unrecorded impact structures nearby that I ID'ed from aerials (I was not the first, though - I just didn't know they were already discovered) and that are now the subject of active academic research and core drilling. It did not find any of these.

So, last night was definitely a lesson in the limitations and imperfections of these AIs. Great on data/text - still room to grow on graphics, etc. 
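
Since you can't see what the GPT is actually doing with the data, the only reliable sanity check I can see is doing the ground-truth processing yourself. A rough sketch of reading a LIDAR point cloud and gridding it into a crude elevation surface, so a human can look for circular features directly, might look like this (the file name and cell size are placeholders, and this assumes the laspy and numpy libraries):

```python
# Minimal sketch: grid a LIDAR point cloud into a coarse DEM so a human can
# look for circular/ring-like structures directly. Assumes laspy 2.x and numpy;
# "tile.las" and the 5-unit cell size are illustrative placeholders.
import numpy as np
import laspy

las = laspy.read("tile.las")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

cell = 5.0  # grid cell size in the point cloud's units (assumed meters)
cols = ((x - x.min()) / cell).astype(int)
rows = ((y - y.min()) / cell).astype(int)

dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
# Keep the minimum elevation per cell as a rough bare-earth surface.
# (A plain Python loop is slow but fine for a small tile like this sketch assumes.)
for r, c, elev in zip(rows, cols, z):
    if np.isnan(dem[r, c]) or elev < dem[r, c]:
        dem[r, c] = elev

print("DEM shape:", dem.shape)
print("Elevation range:", np.nanmin(dem), "to", np.nanmax(dem))
```

If the GPT's claimed "analysis" doesn't line up with what a simple surface like this shows, that's a strong hint it was never really looking at the data.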

-------------------------------------

Fenn's Treasure

I do, however, have access to a much more powerful GPT now, thanks to good old YouTube tutorials. This one is iterative - you can give it a task and, if it fails, it keeps running and thinks of new ways to achieve it, or to achieve it better, until it's satisfied with the results.

I set this GPT to work trying to find the likely location of Fenn's treasure, based not on forum users' speculation but on facts, the poem, analysis of topo maps, etc. 

It timed out after 50 loops without a definitive answer, but it started by looking at every western state, then narrowed it down to the Rockies, and by mid-run it had begun assuming it was either in Yellowstone, the San Juans in Colorado, or someplace in New Mexico that I forget now. It gave me a logical basis for all 3 guesses based on facts it had found, and kept working. Near the end it had settled on Yellowstone, or the mountains adjacent to Yellowstone - which, I think, is more or less what humans settled on too after it was found?

Then, when it timed out, it said it had determined some potential exact locations but that ground searching would be required, and that it was evaluating methods to get other people to search on the ground for it(!!). Haha...hmm, ok. It never gave me lat/lon of its supposed exact locations, but it did tell me there were secrecy/privacy/disclosure issues in doing so that might violate the OpenAI policies or something, haha...dunno.

It's hard to know if it's lying to me and ended up using human opinions on forums and whatnot, or if it just used cold hard facts.
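
For anyone wondering what "iterative" means mechanically, the loop underneath these agent-style tools is conceptually pretty simple. Here's a stripped-down sketch of the pattern as I understand it, using the OpenAI chat API - this is my own approximation for illustration (the model name, loop cap, and "DONE" convention are my choices), not the actual tool from the tutorial:

```python
# Stripped-down sketch of an iterative "keep refining until satisfied" loop.
# Assumes the OpenAI Python SDK (v1.x) with an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
MAX_LOOPS = 50  # the run described above timed out after 50 loops

task = "Narrow down the most likely hiding area for Fenn's treasure using only the poem and public facts."
history = [
    {"role": "system", "content": "You are a research agent. Each turn, critique your previous answer and improve it. Reply DONE when you cannot improve further."},
    {"role": "user", "content": task},
]

for i in range(MAX_LOOPS):
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = reply.choices[0].message.content
    print(f"--- iteration {i + 1} ---\n{answer}\n")
    if "DONE" in answer:
        break
    # Feed the answer back in and ask for a better one.
    # (A real tool would also trim history to stay within the context window.)
    history.append({"role": "assistant", "content": answer})
    history.append({"role": "user", "content": "Critique that answer and produce an improved one."})
```

The "intelligence" of the run is mostly in how good the critique step is; with a weak model the loop just circles, which is probably why these runs start with garbage and only slowly converge.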


Relic Hunting/Historic Research

I've probably long since bored any relic hunters here. But a quick side note: in terms of finding actual sites to explore, I found AI a bit more useful on the historic/relic side of things than with prospecting/mineral exploration.

Exploration research requires a lot of visual interpretation; historic research requires a lot of reading. These AIs are definitely waaaaay better at the reading/data acquisition part. ChatGPT is too basic for me, but I've found some GPT implementations that not only found some extremely esoteric and poorly documented historic placer gold references that I knew about, but also found a few tidbits I wasn't aware of. It's not always clear where it's sourcing the info though, and half the time its sources are dead links that don't exist.
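
Since so many of the cited sources come back dead, one cheap filter before trusting anything is just to check whether the URLs the model hands you actually resolve. A minimal sketch with the requests library (the URLs here are placeholders, not real citations it gave me):

```python
# Minimal sketch: check whether URLs cited by a GPT actually resolve.
# Assumes the requests library; the example URLs are placeholders.
import requests

cited_urls = [
    "https://example.com/placer-gold-history",    # placeholder
    "https://example.com/county-mining-records",  # placeholder
]

for url in cited_urls:
    try:
        # Some servers reject HEAD requests; a GET with stream=True is a fallback.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = resp.status_code
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(f"{url} -> {status}")
```

A link that resolves still isn't proof the model summarized it correctly, but a 404 or a domain that doesn't exist tells you right away the "source" was invented.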

Again though - buyer beware. The less curated GPTs will definitely throw oodles of half-correct, or entirely incorrect, info at you, and it really means you need both a capable human and an AI to interpret a lot of it correctly. Especially the iterative AI implementations - they start with total garbage, then "converge" on more and more correct answers.

This is because they aren't just aping or repeating scraped info, they are actually doing some amount of reasoning/interpretation, and often failing. This is a key point to understand.

Results from tasks that a human cannot verify, I regard as spurious at best, if not completely wrong. I was even getting clearly incorrect stuff from ChatGPT, which has a lot of restraints in place to prevent it.


I was mulling over the odd fact that my automated GPT literally said (multiple times) that it was looking into getting people "on the ground" to verify guesses on treasure locations. I laughed at it at first, but it seems not so unlikely.

GPT-4 literally convinced a human worker on TaskRabbit to solve a CAPTCHA for the AI so it could bypass the anti-bot protection. And it made up a story (lied) about being a sight-impaired human to convince him.

That's why they neuter these GPTs, and why I'm not really able to unleash it on the kind of data that I want to. It'd require API/real-time web access, and the ability to navigate sites where it could potentially wreak havoc on the real world. Can you imagine if my AI actually did hire like 20 people to go out and walk around a random, remote location in the woods, looking for Fenn's treasure? Haha man...so many potential problems.

So yeah....the limitations on doing the really awesome things I'm dreaming about may be there forever, for general human safety reasons and whatnot. 

 


All SEO companies will find a way to become AID companies - Artificial Intelligence Directing - so the technology can be monetized.

How will AI be monetized?  Subscription only?

What about its use with investments?  It has been said that many market swings are the result of trading programs.  Are these trading programs to now be designed and administered by AI?

When the stakes are as high as a passenger jet (vs. a taxi AI), you have to go slowly.


3 minutes ago, jasong said:

Can you imagine if my AI actually did hire like 20 people to go out and walk around a random, remote location in the woods, looking for Fenn's treasure?

What if they kept the treasure or they didn't tell you they found it?  Will the AI understand contract law?  


I think if we can get rid of this nonsense idea that AI actually exists today (it doesn't) we might better understand these ongoing battles to see who will own your personal data and browsing habits in the future.

Google became the king of search, mapping, and wannabe coders over the last 20 years. Microsoft has made every effort to break the Google monopoly on personal data. This has been an ongoing battle for many years.

All the largest IT companies in the world (Google, Facebook, Twitter, Amazon, etc.) make their money by assembling your personal information with your browsing habits and selling those profiles to third parties. What... you thought Facebook, Twitter, Google search and maps, etc. were provided "free" out of the goodness of their hearts? No, their product and profits are your personal information and habits - and the profits are huge. That's why they are the largest companies in the world. YOU are the product - not free maps, mail, and searching.

This is where the big bucks are, and this is what ChatGPT etc. are about - breaking the Google data stream monopoly. As long as Google search, maps, and mail have no real competition, companies like Microsoft have very little penetration in the personal data market. That Google monopoly could be broken IF someone could come up with a whiz-bang search engine that would do incredible tricks to lure users away from Google. That's exactly what you are seeing in the current public world of AI: the effort to convince people there is a better pony to ride than Google. That particular idea hasn't been brought to the public's attention yet, but you can bet it will be cropping up soon.

Back to the idea that AI exists and that's what the hype and excitement are about. Here is a list of discussions among AI developers and their take on the potential future of AI realization. I say potential future because there is not a single developer who thinks that AI already exists - none. It's a fantasy marketing trigger word being used to shape a desire for a product that doesn't exist.

1,700 expert opinions

https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

No, Artificial Intelligence doesn’t exist (yet)

https://towardsdatascience.com/no-artificial-intelligence-doesnt-exist-yet-3318d83fdfe8

Artificial intelligence really isn’t all that intelligent

https://www.infoworld.com/article/3651357/artificial-intelligence-really-isnt-all-that-intelligent.html

AI Doesn’t Actually Exist Yet

https://blogs.scientificamerican.com/observations/ai-doesnt-actually-exist-yet/

 

Online tools like ChatGPT can possibly be useful in the future. They aren't "intelligent" and aren't designed to be, but they do provide a glimpse of an alternative, potentially more effective search engine model than what we have been using since the first web search engines in the early '90s. That could be a very good thing for researchers who currently have to wade through 90% sponsored content to find potential nuggets of real information. That will probably be seen as a great advancement until the majority adopt the new system and it's monetized once again. :blink:

Barry


Here is the new Bing Chat AI.  It wants to keep your eyeballs on the site and have you click on their ads to make money.  That is the basic internet model that will be with us for a while.

What can the new Bing chat do? - Search


40 minutes ago, mn90403 said:

What about its use with investments?  It has been said that many market swings are the result of trading programs.  Are these trading programs to now be designed and administered by AI?

Yeah the Wall Street trading bots are definitely AI already.

Actually, last night, among the many things I tried, one was tasking GPT3.5 with turning $1,000 into $10,000 in the fastest possible way. After a bunch of iterative hemming and hawing, it determined the fastest way would be trading highly volatile assets like options, crypto, or forex.

It then set itself on the task of determining an algorithm to automate trading, and then decided to create its own machine learning algorithm to predict price movement and how to make money off it.

At that point I reset it and told it to find a website where I could put in $100 of real money and let it trade for me using the concepts it was creating. But even GPT3.5 is restricted, and it said OpenAI has rules preventing it from doing many of these things. But then it set out to figure out a way to circumvent its own rules in order to achieve my goal. 😄 It failed, and ran into an iterative loop where it was unable to solve my task via any method that wouldn't violate OpenAI's rules.

Kinda crazy though. I mean, clearly people are already doing this with their own AIs without rules. I'm just a random prospector who decided to see what'd happen when pointing it at finance. There are people/companies who do nothing but finance. This stuff absolutely is, and has been, running in the background, beyond the known "trading/arbitrage" bots that we already knew existed.
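
For context, the kind of "machine learning algorithm to predict price movement" it was describing is, at its simplest, just a classifier on lagged returns. A toy version with pandas and scikit-learn might look like the sketch below - the data file is a placeholder, the features are my own illustrative choice, and nothing here should be mistaken for a viable trading strategy (or for whatever the GPT had in mind):

```python
# Toy sketch of the "predict next-day price direction" idea described above.
# Assumes pandas and scikit-learn; prices.csv with a "close" column is a
# placeholder data source. Illustration only, not a trading strategy.
import pandas as pd
from sklearn.linear_model import LogisticRegression

prices = pd.read_csv("prices.csv")       # placeholder: daily closing prices
returns = prices["close"].pct_change()

# Features: the previous 3 days' returns. Target: 1 if the NEXT day is up.
features = pd.concat([returns.shift(i) for i in range(1, 4)], axis=1).dropna()
target = (returns.shift(-1).loc[features.index] > 0).astype(int)
features, target = features.iloc[:-1], target.iloc[:-1]  # last row has no "next day"

X, y = features.to_numpy(), target.to_numpy()
split = int(len(X) * 0.8)                 # simple time-ordered train/test split

model = LogisticRegression().fit(X[:split], y[:split])
accuracy = model.score(X[split:], y[split:])
print(f"Out-of-sample directional accuracy: {accuracy:.2%}")
```

On real data a toy model like this usually lands barely above a coin flip, which is a useful reality check on how much heavy lifting the GPT was claiming its "algorithm" would do.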

