Posted by Rick-Rarecards, 08-09-2021, 08:22 AM

Quote:
Originally Posted by Peter_Spaeth
I think the discussion we were having yesterday is likely to be buried in a thread nominally about something else entirely, so I am starting a new one. Hopefully the posters who weighed in on its deficiencies yesterday will do so here or reproduce their posts; I wasn't comfortable doing that myself. And hopefully any advocates will weigh in as well. My partially formed opinion, based on what I've read here and elsewhere, and heard, is that this technology is a long way from being ready for serious use, and that it isn't likely to help with the problems we all know about, at least any time soon.
Pasting the information from yesterday:

I can think of three questions AI/ML could help with:
1) Detect whether a card is real or fake
2) Classify the card (type, year, etc.)
3) Classify the grade

In all of these cases, I can assure you people will want to know why the algorithm gave the grade/class/etc., i.e. an explanation of how the algorithm got its result. That requires explainable AI, which is beyond what algorithms can do today. Furthermore, all of this requires a large training set (you need a lot of examples), including fake examples! Who has that many training examples sitting around? Not to mention the level of image fidelity needed.

It is a very long discussion, but I will try to give you the 30,000 ft view. You could build tools for 1-3, but they would be very limited. There are technological limitations as well as practical ones.

The practical limitations are the easiest to understand. Simply put, if you can't explain the results, the tools are useless. How crazy would the industry look if you received the following letter: "Dear Sir/Madam, our software has determined that your card has a 51% chance of being fake. Therefore, we are unable to certify it. Thank you for using our services."
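To make that concrete, here is a minimal sketch of what such a black-box verdict looks like in code. Everything in it is made up for illustration (synthetic numbers stand in for card features, and the model is a toy); the point is that the output is a bare probability with nothing behind it:

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Made-up stand-in data: 200 "cards" with 5 numeric measurements each,
# labeled 0 = real, 1 = fake. A real system would use image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=1.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The model's entire output for a new card: one probability, no reasons.
p_fake = model.predict_proba(X[:1])[0, 1]
print(f"Our software has determined your card is {p_fake:.0%} likely to be fake.")
```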


The reason we can't explain the results is a technical limitation. Current AI/ML is a "black box" approach: you take an algorithm and train it on examples. Say I was creating an AI/ML tool for 1), detecting whether a card is real or fake. You show the tool a bunch of labeled examples of fake and real cards, and it creates its own internal method for deciding which is which. You then test it on a bunch of cards it has never seen before and compare its results to human graders. If it does a good job, you are good to go!
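For anyone curious, this is roughly what that train-then-test loop looks like, sketched with scikit-learn on synthetic numbers (a real system would start from high-resolution scans; every name and figure here is illustrative):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import numpy as np

# Synthetic stand-in for a labeled archive: each row is one card's
# extracted features; y is the label graders assigned (0 real, 1 fake).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(size=1000) > 0).astype(int)

# Show the tool labeled examples, but hold back cards it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Compare its calls on the unseen cards against the graders' labels.
print("Agreement with graders:", accuracy_score(y_test, clf.predict(X_test)))
```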

So where do the issues come from? Suppose the algorithm has never seen a certain color, a certain name, a particular type of error, or a weird fleck of dust: characteristics of cards that never existed in the training set (have you seen those cards that had a piece of fabric on them?). You might say, well, if it encounters something it's never seen before, it should tell someone to inspect the card! That turns out to be an even more complicated problem (anomaly detection). Plus, it can't tell anyone what it didn't understand about the card that tripped it up (explainable AI again). You might even say, well, let's just show it everything that has ever been graded before. That can cause something called overfitting: the algorithm becomes so finely tuned to its training examples that it throws out anything that doesn't closely match them.
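Here is a toy sketch of that anomaly-detection idea, again on synthetic numbers (all of the names are made up). Note that, true to the black-box problem above, it can flag the card but cannot say what looked strange about it:

```python
from sklearn.ensemble import IsolationForest
import numpy as np

# Fit an anomaly detector on the features of the cards we trained on.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 10))
detector = IsolationForest(random_state=0).fit(X_train)

# A card unlike anything in the training set (say, one with fabric
# stuck to it) should land far from the training data.
odd_card = rng.normal(loc=8.0, size=(1, 10))

# predict() returns -1 for an outlier, 1 for something familiar.
if detector.predict(odd_card)[0] == -1:
    print("Card flagged for human inspection; no word on what looked odd.")
```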

It gets more complicated the more you think about it. And this is just one of many problems facing arguably the easiest of the three tasks.


So what could today's AI/ML do for detecting a fake card?

I will give you a possible system for 1), detecting a fake card. Assume the industry agreed on a set of descriptors for how one would fake a card, categories if you will. I'm not familiar with all of the ways to create a fake card, so apologies for the limited list: 1) Reprint (passed off as an original), 2) Old card washed and reprinted, 3) New print, 4) etc.

One could run an algorithm that first tells you whether the card is fake or not. Then you could do one of the following: have another tool tell you which category is most likely (i.e., pick the single top reason); have it give you the likelihood that the card falls in each of the categories; or have an individual algorithm for each category, with each giving its own probability. A rough sketch of all three options is below.
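All of the category names, features, and numbers in this sketch are placeholders, not a real taxonomy; it just shows the shape of each reporting option:

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical fake-card categories; features and labels are synthetic.
categories = ["reprint", "washed_and_reprinted", "new_print"]
rng = np.random.default_rng(7)
X = rng.normal(size=(600, 8))
y = rng.integers(0, len(categories), size=600)  # which method was used

clf = LogisticRegression(max_iter=1000).fit(X, y)
probs = clf.predict_proba(X[:1])[0]

# Option A: report only the single most likely category.
print("Top category:", categories[int(np.argmax(probs))])

# Option B: report the likelihood for every category.
print({c: round(float(p), 2) for c, p in zip(categories, probs)})

# Option C would instead train one yes/no model per category
# (e.g., with sklearn.multiclass.OneVsRestClassifier) and report
# each probability independently.
```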

Again, these will all be black-box answers. The system won't give you the reason it picked one category over another, and it won't tell you which card was washed; AI/ML is not magic! The more detail you want, the more finely tuned, hand-crafted algorithms you need. And let's not forget, there are always new methods for faking cards, so you would have to keep adjusting your algorithms, which means some fakes will always make it through.