Thursday, April 23, 2026

Finding the Connection Between Philadelphia's Proposed Rideshare Tax and the Funding of Public Schools

As reported in numerous stories, including this one, the Mayor of Philadelphia has proposed raising an earlier proposed 20-cent-per-ride tax on rideshares to $1 per ride. The revenue would be used to fund Philadelphia's public schools. When I tried to learn the justification for using a tax on rideshares to fund the schools, all I could find was that it would raise millions of dollars for a school system in desperate need of funding.

Debate seems to focus on whether the rideshare companies would pay the tax or whether it would be passed along to passengers. When a city council member pointed out that the legislation required the rideshare companies to collect the tax from passengers, the city's Finance Director argued that the companies "can then decide how they want to pass that on... increased their fee or hold riders harmless."

What I haven't discovered is any discussion about the justification for charging rideshare companies or rideshare passengers a tax to fund public schools. Do rideshare companies and their passengers obtain a greater benefit from the public schools than do other companies and other individuals? Do rideshare companies and their passengers impose a greater burden on the public schools than do other companies and other individuals? I cannot find anyone advocating this proposed tax addressing these questions.

Readers of MauledAgain know that I had similar objections to the city's soda tax. As I wrote in Will a Revisit of Philadelphia’s Soda Tax Bring Me an Invitation?, "As I pointed out in some of my commentaries, if the soda tax was truly about improving health, the tax should and would be a tax on sugar and similar substances and not just beverages. Instead, the tax is about revenue, and the large amount of sugary beverages sold in Philadelphia was, and to some extent still is, low-hanging tax revenue fruit."

The proposed rideshare tax strikes me as another way to collect revenue from an easy target despite the lack of a link between those paying the tax and what it funds. It will negatively impact rideshare passengers, either directly or indirectly, especially those who use rideshare services because they cannot afford to own their own vehicle. It will tempt the city to raise the tax until the tax begins to encourage rideshare companies or their drivers to pull out of the city.

Sunday, April 12, 2026

When "Death and Taxes" Meet AI

Ten days ago, in Using AI When Preparing Tax Returns: Avoid the Trap, I explained why I am not a fan of AI. I consider AI not ready for prime time when it comes to analysis that requires wisdom, judgment, real-life experience, critical thinking, or common sense. Perhaps it will never be ready for prime time. I based my reluctance not only on my personal interactions with AI but also on careful consideration of how AI accumulates the data it uses in its processing. AI engines use data scraped from the internet. They fail miserably when it comes to distinguishing between factually correct and factually incorrect data. They don't do a good job deciding whether a word in one clump of data should be matched with the same word in another clump of data.

In my commentary ten days ago, I wrote in the context of taxation. I pointed out that AI doesn't do a good job distinguishing between, for example, the text of a Code section published on the internet in 2015 that is still good law and the text of a Code section published on the internet in 2015 that is no longer good law because it has been repealed, amended, or otherwise set aside.

I did not close the door on the possibility that someday AI will function with the critical thinking skills, wisdom, judgment, experience, and perspective that expert humans bring to the table. Unquestionably, that day isn't today. That day may never arrive. Though "AI" is probably useful with simple tasks that computers do well, such as computation or mere data compilation without analysis or judgment, it isn't ready to answer questions that can be answered properly only with the skills that "AI" does not possess.

This evening I was made aware of the threat that AI poses in situations far closer to life and death than taxes. According to a Nature.com article, a team at the University of Gothenburg led by a medical researcher invented a skin condition that they called bixonimania. They described it as a condition in which eyelids turn slightly pink when rubbed after eyes become sore and itchy from staring at screens and being bombarded by blue light. The team uploaded two fake studies describing the condition to a preprint server to see how AI would handle false information. Very quickly, AI platforms treated the invented disease as real. That caused the fake studies to be cited in other literature, demonstrating that researchers are using AI results without checking the cited studies.

Worse, the team filled its publications with obvious hints and clues that the studies were fake. The lead author was a fake person with the name Lazljiv Izgubljenovic, identified as working at a university called Asteria Horizon University in Nova City, California. Neither the university nor the city exists. One of the published studies thanked "Professor Maria Bohm at the Starfleet Academy." Both papers attributed funding to “the Professor Sideshow Bob Foundation for its work in advanced trickery, . . . part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad.” The papers included statements that "this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group.”

Anyone who doubts that "AI" simply grabs, mixes, and regurgitates what it finds, without subjecting its accumulated data to the scrutiny that wisdom, judgment, experience, critical thinking, and common sense provide, now has all the proof necessary of the danger posed by "AI," even setting aside the explosion in environmentally risky data centers needed to process this massive, too-often-reckless movement of data.

The Nature.com article gives examples of how various AI platforms are handling the fake disease bixonimania. Take a look. It's frightening. One platform's spokesperson, replying to an inquiry about the platform's treatment of the fake disease as "an emerging term," offered this statement: “We don’t claim to be 100% accurate, but we do claim to be the AI company most focused on accuracy.” Is that a phrase tax return preparers should post on their websites and add to the sign in the storefront window?

As risky as it is for taxpayers and tax return preparers to ask AI to prepare or help prepare tax returns, it is even riskier for individuals and medical professionals to rely on AI platforms when diagnosing symptoms. Though the University of Gothenburg research team generated misinformation for purposes of testing AI platforms, there are millions of individuals carelessly or deliberately publishing false information. If it gets published, it gets scraped by AI engines. So beware. Beware of fake tax law, fake medical illnesses, fake weather reports, fake this, that, and the next thing. Death gets lumped with taxes in that "death and taxes" idiom, but the damage done by AI when it comes to taxes pales in comparison to the danger it poses when life and death are in play. Research. Think. Question. Analyze. Verify. Review. Beware. It's not just your tax return that is at risk.

Thursday, April 02, 2026

Using AI When Preparing Tax Returns: Avoid the Trap

The headline "I asked ChatGPT for tax help—experts say I fell into a classic trap" almost says it all. The story elaborates on what happened to its author.

The author is one of many taxpayers who prepare their own returns. For years he had a rather simple return, because his income was reflected on a W-2 and nothing else was happening in his life that would have created any complexity on the return. But in 2025 he purchased some employer stock through an employee stock purchase plan, and then sold most of the stock to generate cash for his wedding expenses.

Knowing that the tax rules for the sale of employer stock aren't very simple, he turned to ChatGPT. He reports that his first question brought an answer as to how the sales are treated for tax purposes, broken down into "digestible bullet points." He uploaded his Form 1099 from the brokerage firm that processed his stock sales. ChatGPT then told him that he needed to use a number different from what the brokerage reported, though it isn't clear from the article whether that number was gross sales, net sales, basis, or something else. He was instructed to examine his "last few W-2s to see that they included a certain line item." He was also told by ChatGPT, “This is a very simple return with one stock plan wrinkle. You do NOT need a CPA.”

Fortunately, at that point he contacted a CPA he knew. The CPA, after being brought up to speed, concluded that the author "had gotten possibly correct but also incomplete information." The analysis of the W-2 forms was "quite important" in determining whether the brokerage was indeed using the correct numbers. It turned out that, according to the CPA, some of the numbers on the Form 1099 seemed to indicate that the author had engaged in transactions that he "may not have made." The CPA suggested that ChatGPT had not provided any information about this issue probably because the author had not asked. A high-quality tax professional knows to provide relevant information even if the client fails to ask, because most clients don't know that they need to ask or what to ask.

The author concludes that the advice he obtained from ChatGPT "seemed so sound and was so breezily delivered that [he] was ready to file and risk having made a mistake." The author spoke with an accounting professor who shared, "AI will convince you that the sky is green. It is so convincing." The professor cited a time when a chatbot incorrectly answered one of the tax questions he gives his students. He continued, “It gave me this response that the mechanics were perfect, but I had to take a step back and say, ‘Well, you’re wrong.’”

An AI strategy company founder informed the author, “But by default, large language models are trained to be helpful assistants. Oftentimes you’re going to run into hallucinations.” "Hallucinations" is a fancy word for wrong answers.

The author's conclusion? He sums up advice from experts: "[Y]ou’d be wise to tread very carefully before using AI to help with your taxes." He quotes the AI strategy company founder, “If you make a mistake while using AI to do your taxes, it could get you in trouble with the IRS. And a valid excuse isn’t, ‘The AI made me do it.’” So true.

The author then shares what "pros say to keep in mind" when using AI as a tool to prepare tax returns. I'll let you read the article because my advice would simply be, "Don't." My experience with "AI" is that it's not ready for prime time. I asked one AI site "who is James Edward Maule?" and the answer was so amazingly incorrect I wondered if the "I" in "AI" should be changed from intelligence to ignorance. Why is AI still insufficient? AI engines use data scraped from the internet. They don't do a good job distinguishing between factually correct and factually incorrect data. They don't do a good job distinguishing between, for example, the text of a Code section published on the internet in 2015 that is still good law and the text of a Code section published on the internet in 2015 that is no longer good law because it has been repealed, amended, or otherwise set aside. They don't do a good job deciding whether a word in one clump of data should be matched with the same word in another clump of data. They remind me of first-year law students who focus on the words of a case at face value rather than considering the context in which those words are being used.

Perhaps someday "AI" will function with the critical thinking skills, wisdom, judgment, experience, and perspective that expert humans bring to the table. But that day isn't today. That day may never arrive. In the meantime, "AI" is probably useful with simple tasks that computers do well, such as computation or mere data compilation without analysis or judgment. But "AI" surely isn't ready to answer questions that can be answered properly only with the skills that "AI" does not possess.