Heard Nutrition

Beta

A nutrition logger with bigger ambitions.

Today you can type or speak a meal, review the parsed items, and get a nutrition breakdown backed by USDA data. That is the first layer. The larger goal is a more reliable nutrition system that can support everyday logging, creator-led meals, coaching workflows, and other products that need deterministic food data.

What Heard is currently

A meal logger built around reviewable nutrition logic.

Heard takes a natural-language meal description, breaks it into individual food rows, matches those rows against the USDA FoodData Central database, and scales the nutrition to the serving you actually ate. The point is more than speed and convenience: the result stays inspectable. You can see what matched, what counted, what still needs input, and where the system is being cautious.

That makes Heard different from a generic AI calorie guess. The aim is a dependable nutrition layer, not a response that merely sounds confident.

How it works

One meal description goes through a full review flow.

  1. Type or speak a meal the way you would naturally describe it.
  2. The app parses that input into foods, quantities, and relevant modifiers.
  3. Each food is matched against the current dataset and scaled to your serving.
  4. Review the rows, edit what needs attention, and check what is included in the totals.
  5. Save the meal or report issues that help improve the matching logic.
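The flow above can be sketched in a few lines. This is a simplified illustration, not the actual implementation: the `FOODS` table, `match_and_scale`, and the parsed rows are all hypothetical stand-ins for the real parser and USDA matching, which are far more involved. USDA FoodData Central reports nutrients per 100 g, which is why scaling divides by 100.

```python
# Sketch of the parse -> match -> scale -> review flow.
# FOODS stands in for the USDA FoodData Central lookup;
# names and values here are illustrative assumptions.

# Nutrients per 100 g, the unit USDA FoodData Central uses.
FOODS = {
    "greek yogurt, 0%": {"kcal": 59, "protein_g": 10.2},
    "honey":            {"kcal": 304, "protein_g": 0.3},
    "blueberries":      {"kcal": 57, "protein_g": 0.7},
}

def match_and_scale(name, grams):
    """Match a parsed food name and scale per-100 g nutrients to the serving."""
    per_100g = FOODS.get(name)
    if per_100g is None:
        # Unknown food: flag the row for review instead of guessing.
        return {"name": name, "status": "needs review"}
    scaled = {k: round(v * grams / 100, 1) for k, v in per_100g.items()}
    return {"name": name, "grams": grams, "status": "matched", **scaled}

# Rows as parsed from "170 g 0% Greek yogurt, 15 g honey, 30 g blueberries".
parsed = [("greek yogurt, 0%", 170), ("honey", 15), ("blueberries", 30)]
rows = [match_and_scale(name, grams) for name, grams in parsed]

# Only matched rows count toward the totals.
totals = {
    "kcal": sum(r["kcal"] for r in rows if r["status"] == "matched"),
    "protein_g": sum(r["protein_g"] for r in rows if r["status"] == "matched"),
}
```

The key design point is the `needs review` branch: a row the system cannot match is surfaced for editing rather than silently filled with an estimate.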

Getting good results

The clearer your input, the better the output.

Strong inputs

  • Lunch was 170 g 0% Greek yogurt, 15 g honey, and 30 g blueberries.
  • Just ate 2 large eggs, 2 slices whole wheat toast, and 1 tbsp butter.
  • 150 g extra lean ground beef, 200 g potatoes, and 75 g broccoli.

Harder to match

  • Lunch was a bowl of greek yogurt, with some honey and blueberries
  • Just had a couple of eggs, couple pieces of toast, and a spoonful of butter
  • Had a bowl of ground beef, potatos, and broccoli

Grams are still the strongest input when you have them. If you do not, household measures such as cups, slices, and tablespoons still work well. Brand and preparation matter when they materially change the food.
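Household measures end up resolved to gram weights before nutrition is scaled. A minimal sketch of that step, where `PORTION_GRAMS`, `to_grams`, and the conversion weights are illustrative assumptions rather than real USDA portion data:

```python
# Sketch: resolving household measures to grams before scaling.
# The conversion weights below are illustrative assumptions.
PORTION_GRAMS = {
    ("honey", "tbsp"): 21,
    ("butter", "tbsp"): 14,
    ("egg", "large"): 50,
}

def to_grams(food, qty, unit):
    """Convert a household measure to grams; None means the row needs review."""
    per_unit = PORTION_GRAMS.get((food, unit))
    return qty * per_unit if per_unit is not None else None

print(to_grams("honey", 1, "tbsp"))     # 21
print(to_grams("butter", 0.5, "tbsp"))  # 7.0
print(to_grams("honey", 1, "cup"))      # None -> flagged for review
```

An unknown food/unit pair returns `None` so the row can be flagged rather than filled with a guess, matching how unclear quantities are handled elsewhere in the app.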

Beta

This beta is both a usable product and a training run.

Heard currently relies on USDA data only. That gives the app a strong public nutrition source, but it also means coverage is mixed in some areas. Some branded foods, niche products, and very specific preparations still need better dataset support.

  • Some foods will need manual edits, especially vague or branded items.
  • USDA-only coverage means some results are stronger than others today.
  • Rows with unclear quantities may be flagged instead of being over-guessed.
  • Real user inputs from this beta are being used to tighten the algorithm and expose weak spots.

So the beta is not just early access. It is also how Heard learns where the current system is strong, where more datasets are needed, and which product features matter most in real use.

Where this is headed

Meal logging is the entry point, not the full ambition.

The longer-term goal is to turn Heard into a nutrition system that can do more than count macros in isolation. That includes broader data coverage, stronger deterministic matching, better creator and coach workflows, and eventually a more social layer around meals people actually want to share, save, reuse, and learn from.

Over time, the same engine should be useful beyond this app too. The aim is a reliable nutrition backend that can support other products, research workflows, and practical consumer tools that need verified food data instead of loose estimates.

Feedback

If something feels off, that is useful signal.

If a match is wrong, a serving looks strange, or the product flow feels confusing, let us know. That kind of real-world feedback is exactly what helps turn this from a promising beta into a stronger nutrition engine.