Fund marketers shouldn’t rely on AI solutions yet

Artificial Intelligence remains risky for professional use

Artificial Intelligence (AI) is one of the most powerful tools available to the public. It leverages the information of millions of users and contributors, constantly learns from the queries and questions it receives, and translates all that data into composed results. It’s a digital genie, but is that genie ready to be let out of the bottle when it comes to fund marketing?

The short answer is no, and there are four reasons why AI is not yet worth the hassle or the investment.

  1. ONLY AS SMART AS THE INTERNET AND THE PROGRAMMER

    To be clear, AI does not presently generate new ideas and produce content on your behalf. Instead, AI uses a high-level information-aggregation algorithm to scour the public-facing side of the internet and any assigned databases, cross-references the data it finds against the given request, and then organizes that data in a semi-coherent manner (how coherent depends on how well the AI can mimic human-generated content). In short, it is copy/paste on crack.

    The majority of the internet’s free content is for recreational consumption, and high-level content – academic journals, news articles, research studies, databases, etc. – is behind paywalls and login gateways. This means the information available for an AI to leverage, and thus the quality of the results it can provide, is limited to the lowest-value information publicly available.

    Unless world-renowned fund managers, brokerage firms, and SEC experts want to start generating thousands of hours of relevant content and publishing it for free, AI simply lacks the necessary information to produce worthwhile, expert-level content. Funds should lean on their own experience and knowledge to market themselves, not low-hanging internet content that doesn’t actually showcase a fund manager’s expertise or specifically apply to the fund itself.
  2. AI CONTENT IS SPAM ACCORDING TO GOOGLE

    Google’s webmaster guidelines deem autogenerated content of any kind, even content produced with high-level AI algorithms, to be spam. This is coming from Google, which has its own AI, Bard. This isn’t to say that autogenerated content doesn’t have a purpose; it means that autogenerated content does not produce unique or value-laden results, as it is a regurgitation or translation of what was already there.

    Take Google Translate or Grammarly, for example. These tools use machine learning and databases to take an input and autogenerate an output based on the information available to them. Sometimes they are spot-on. Sometimes they give you half a dozen results and rely on you to manually determine the right one (which they record and use to improve future results); other times they are limited by their own data and cannot produce viable results at all. AI falls into the same category.

    AI is improving constantly, but the same technology that brings AI closer to actually producing unique, value-laden results is also being used to track and mark information on the internet as aggregated, false, or spam. The last thing a fund manager wants is to build the fund’s content on information that shortly thereafter gets marked as autogenerated or “not written by a human.”
  3. A COPYRIGHT, LIBEL, AND LIABILITY CONCERN

    How often do users of AI request that all aggregated information be properly cited based on where the related data was found? Are prompts written to specify that attribution, copyright, and trademark be properly represented? Are cross-referencing and fact-checking steps included to ensure that the results are accurate according to multiple sources? Do SEC requirements or other industry-specific terms factor into the requests?

    AI is simply a tool working on someone’s behalf. It is not required to meet your standards, regulations, ethics, or best practices, and even if you ask it to do so in a prompt, it can only do what is within the scope of its algorithm, its programming, and the information it accumulates. It spits out information without consequence, and that is fine while the result is for the user only…until the user shares it. At that moment, the infringement, misappropriation, libel, outright factual inaccuracies, and whatever other problems the content carries become real, and the responsibility for all of them becomes the user’s.

    The reality is that it takes a lot of work to request content from AI in a way that attempts to cover every form of liability, best practice, and regulation. Then you have to audit the content after the fact to make sure every potential pitfall or liability concern is addressed. And then you are still beholden to the information at the AI’s disposal, which may or may not be properly sourced, accurate, or otherwise reliable.
  4. DESIGN IS ABOUT CONSISTENCY AND BRAND, NOT POPULARITY

    If the intent is to use AI in a manner that skirts content concerns, that leaves automated design. But what does an AI design tool really do? AI is still limited by the information available to it – the constructs of whatever database, software, algorithm, or system is in place. It uses those, along with publicly accessible knowledge, to construct something that visually matches the given request.

    An AI design algorithm starts with what is popular and aligns with the prompt, then allows subsequent alterations to the prompt to personalize the result. It relies on the user to make each piece consistent; otherwise, the various marketing materials become pretty but standalone pieces that default to whatever is popular. And it does not take into consideration accessibility, best practices, software limitations, and a variety of other vital design concepts; it makes that one thing look good.

    A fund’s brand is not a collection of pretty things. It is a uniform presentation of what that fund represents, with materials that are in alignment and meant to aid fund managers. What good are marketing and sales materials that are inconsistent, or that are all shock and awe yet fail to properly present the vital content needed to actually convert prospects or retain investors?

At present, AI lacks the finesse to represent funds properly and presents too many forms of liability. What a fund might save on content creation and design, it will likely have to spend on copyeditors and lawyers, and in the end the result still will not be expert quality. Doing the work in-house or leveraging a professional agency will produce a better long-term result and prove to be worth the investment.
Disclaimer: It is only a matter of time before AI is optimized in a manner that addresses some, if not all, of its shortcomings. For that reason, we intend to follow up on this topic to help determine if and when AI for fund marketing is viable.