
Structured data is the moat: why custom fields matter more in the AI era

Every project management tool stores the same five fields: title, status, assignee, due date, priority. Five fields to capture the full complexity of how teams actually work. It is like trying to describe a house with only “address” and “color.”

This worked when humans were the only ones reading the board. Humans fill in the gaps with context, memory, and hallway conversations. But AI agents do not have hallway conversations. They have exactly what is on the board — and if the board only stores five fields, the AI is working with five percent of the picture.

The metadata gap

When a team lead asks “which tasks are client-facing?” or “what is the effort level on backend work?” they are asking about metadata that exists in people's heads but not in the system. The answer requires a human to scan every task, remember the context, and synthesize.

This is why most AI features in project management tools produce generic summaries. The AI can tell you that 14 tasks are in progress. It cannot tell you that 9 of them are client-facing, 3 require design review, and 2 are blocked by a vendor dependency — because none of that information is structured.

Custom fields close this gap. Not by adding complexity, but by making implicit knowledge explicit.

Why this matters more in the AI era

Before AI, custom fields were a nice-to-have. Power users set them up; most people ignored them. The data sat in a column that nobody filtered.

With AI agents actively reading your board, custom fields become the difference between a useful assistant and a useless one. Consider the difference:

Without custom fields: “You have 23 tasks in progress across 4 projects.”

With custom fields: “You have 23 tasks in progress. 8 are client-facing (3 overdue). The backend refactor has 120 estimated hours remaining. The enterprise deal tasks are all on track for the Q2 deadline.”

Same board. Same tasks. Radically different insight — because the AI has structured data to reason about instead of guessing from titles.
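The richer answer is plain aggregation once the metadata is structured. A toy version, with a hypothetical in-memory board standing in for real task data:

```python
# Three tasks with structured metadata the AI can aggregate
# instead of guessing from titles. Values are illustrative.
tasks = [
    {"status": "in_progress", "client_facing": True,
     "overdue": True, "area": "backend", "effort_hours": 40},
    {"status": "in_progress", "client_facing": True,
     "overdue": False, "area": "frontend", "effort_hours": 8},
    {"status": "in_progress", "client_facing": False,
     "overdue": False, "area": "backend", "effort_hours": 80},
]

in_progress = [t for t in tasks if t["status"] == "in_progress"]
client_facing = [t for t in in_progress if t["client_facing"]]
overdue = [t for t in client_facing if t["overdue"]]
backend_hours = sum(t["effort_hours"]
                    for t in in_progress if t["area"] == "backend")

summary = (f"{len(in_progress)} in progress. "
           f"{len(client_facing)} client-facing ({len(overdue)} overdue). "
           f"Backend has {backend_hours} estimated hours remaining.")
```

No model cleverness required: the insight falls out of a filter and a sum. Without the fields, the same questions are unanswerable no matter how smart the model is.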

What good custom fields look like

The failure mode of custom fields is Jira: 47 required fields, a configuration wizard that takes an afternoon, and a team that stops updating them within a week. The problem is not the concept — it is the implementation.

Good custom fields follow three rules:

  1. Created through conversation, not configuration. Tell the AI “I need to track which tasks are client-facing” and it creates a select field with the right options. No settings page required.
  2. Lightweight by default. Text, number, date, or dropdown. Four types that cover 95% of use cases. No conditional logic, no field dependencies, no validation rules that require a manual. Start simple, add complexity only when the team asks for it.
  3. Visible where they matter. Field values show up on the task detail, in the board context, and in AI conversations. The AI can filter, sort, and reason about them. They are not buried in a properties panel that nobody opens.

The compounding effect

Structured metadata compounds. Every field value that a team member sets makes the AI smarter. After a week of tagging tasks with “client” and “effort level,” the AI can answer questions that used to require a 30-minute status meeting:

  - Which client-facing tasks are overdue?
  - How much estimated effort is left on backend work?
  - Which high-effort tasks are still unassigned?

None of these questions can be answered from title, status, assignee, due date, and priority alone. They require the structured metadata that custom fields provide.

AI agents that populate their own context

The next step is obvious: AI agents that set custom field values themselves. An agent reviewing a PR could tag the linked task with “needs design review” based on the files changed. An agent triaging incoming requests could set the effort estimate based on similar past tasks.
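The PR example can be sketched in a few lines. This is a hypothetical agent rule, not a real Lova behavior; the path prefixes are assumptions about where UI code lives:

```python
# Illustrative heuristic: a PR touching UI code flags the linked
# task for design review by writing a custom field value.
UI_PREFIXES = ("src/components/", "src/styles/", "design/")

def tag_from_pr(task: dict, changed_files: list[str]) -> dict:
    """Set 'needs_design_review' if any changed file is UI code."""
    if any(f.startswith(UI_PREFIXES) for f in changed_files):
        task.setdefault("custom_fields", {})["needs_design_review"] = True
    return task

task = {"title": "Redesign settings page", "custom_fields": {}}
tag_from_pr(task, ["src/components/Settings.tsx", "src/api/client.ts"])
```

The agent writes the same structured values a human would, so everything downstream — filters, summaries, future agent decisions — benefits identically.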

This creates a virtuous cycle. AI reads structured data to make better decisions. AI writes structured data to improve future decisions. The board gets richer without anyone manually filling in fields.

But it only works if the custom fields exist in the first place. You cannot automate the population of metadata that your tool does not support.

The real competitive moat

Features are easy to copy. Structured data is not. A team that has been tagging tasks with custom metadata for six months has a richer, more queryable project history than a team that just started. That history makes their AI more useful, which makes them more productive, which creates more structured data.

This is the flywheel that separates teams that get real value from AI project management from teams that get generic summaries. It is not about having a smarter model — it is about having better data for the model to work with.

In Lova, custom fields are a conversation away. Tell the AI what you need to track, and it creates the field. Set values from the task detail or let the AI populate them. Every field value makes the next AI answer more precise. That is how project management tools should work in the AI era — not more configuration, but more context.

Project management that works the way you think

Lova is a conversation-first workspace. Tell it about your project, it handles the rest — tasks, boards, assignments, and status updates. No setup, no training.
