TjMakeBot Blog (tjmakebot.com)

The Future Is Here: The Next 10 Years of AI Labeling Tools

TjMakeBot Team · Trend Outlook · 10 min read

Introduction: Imagining the Future

From LabelImg in 2015 to TjMakeBot in 2025, data labeling tools have undergone tremendous change. A decade ago, labeling a single image required manually drawing every bounding box, and an annotator could process at most a few hundred images per day. Today, with AI assistance, the same workload can be completed in minutes.

The pace of this transformation is remarkable. Looking back at the past decade:

  • 2015-2018: Open-source tools like LabelImg emerged, moving labeling from the pen-and-paper era into the digital age
  • 2018-2021: Cloud-based labeling platforms rose, enabling team collaboration and improving labeling efficiency by 3-5x
  • 2021-2024: AI-assisted labeling technology matured, with auto-labeling and intelligent recommendations becoming widespread
  • 2024-2025: Large language models merged with vision models, making conversational labeling a reality

Standing at the beginning of 2026, we have every reason to believe that the next 10 years of transformation will be even more profound. As AI evolves from an "assistive tool" to an "intelligent partner," and labeling shifts from "tedious labor" to "human-AI collaboration," the entire industry will undergo unprecedented change.

Today, let's look ahead at the next 10 years of AI labeling tools and explore the technology trends and innovations that are about to reshape the industry.

Trend 1: Smarter AI

Artificial intelligence is evolving at an exponential pace, and this evolution will profoundly change how data labeling works. Future AI will no longer be a simple "tool" but a true "intelligent partner" that understands data, tasks, and user intent.

Development directions:

  • Multimodal AI: Unified processing of image + text + audio

    • Imagine this: you upload a photo of a factory floor, and AI not only identifies equipment, products, and personnel, but also understands "this is an automotive parts production line," and can even comprehend "help me label all areas with potential safety hazards"
    • Multimodal fusion gives AI "full-sensory" understanding, and labeling is no longer limited to a single dimension
  • Active learning: AI proactively selects data that needs labeling

    • Traditional approach: Annotators process every image one by one, spending a lot of time on "easy" samples
    • Future approach: AI automatically singles out the most "valuable" samples -- those where the model is uncertain, boundaries are ambiguous, or the data is especially representative
    • This means: achieving 90% training effectiveness with just 10% of the labeling effort
  • Automatic quality assessment: AI automatically evaluates labeling quality

    • Real-time detection of labeling errors: bounding box offset, category confusion, missed targets
    • Intelligent consistency checks: ensuring uniform labeling style for similar targets
    • Automatic quality reports: labeling accuracy, consistency scores, issue distribution
  • Zero-shot learning: Label new categories without training

    • Tell AI "this is a screwdriver," and it can find screwdrivers in all images
    • No need to collect training data, no waiting for model training -- instant use
    • This will completely solve the "cold start" problem

Expected timeline: 2027-2030

Impact:

  • Labeling efficiency improved 10-20x
  • Labeling costs reduced by 90%+
  • Labeling quality significantly improved

Real-world case outlook: An e-commerce company needs to label 1 million product images across 5,000 categories. Under the traditional model, this would require a 50-person team working for 3 months. In the future, AI will automatically identify 95% of products, the active learning system will filter out 50,000 "key samples" for human review, and the entire process might only require a 5-person team working for 1 week.

Trend 2: More Natural Interaction

Keyboards and mice have dominated human-computer interaction for 50 years, but this interaction paradigm is being disrupted. Future labeling tools will support more natural, intuitive interaction methods, making labeling as simple as everyday conversation.

Development directions:

  • Voice interaction: Labeling through voice commands

    • "Mark the red object on the left as a fire hydrant"
    • "Label all pedestrians in this image"
    • "Change the last label's category to bicycle"
    • Voice interaction frees up hands, allowing annotators to handle multiple tasks simultaneously, significantly improving efficiency
  • Gesture control: Labeling through gestures

    • Finger circling: Draw a circle on a touchscreen or in the air to automatically generate a bounding box
    • Pinch to zoom: Precisely adjust labeling area size
    • Swipe to switch: Quickly browse and switch between images
    • Combined with depth cameras, gesture recognition will become even more precise
  • Augmented Reality (AR): Labeling in AR environments

    • Put on AR glasses and directly "see" objects that need labeling in the real world
    • Tap objects in the air with your finger to complete labeling
    • Especially suited for 3D scene labeling, autonomous driving data collection, and industrial on-site labeling
    • AR labeling will break the "screen" barrier, bringing labeling into the real world
  • Brain-computer interface: Labeling through thought control (long-term)

    • This sounds like science fiction, but companies like Neuralink are making it a reality
    • Imagine: looking at an object, thinking "this is a car," and the labeling is done
    • While commercial use is still far off, this represents the ultimate form of human-computer interaction
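Conversational commands like the ones above ultimately have to be mapped onto concrete labeling actions. A toy sketch of that routing layer is shown below; the grammar and action names are hypothetical, and a real system would sit behind a speech-recognition front end and likely use an intent model rather than regular expressions:

```python
import re

# Toy grammar for commands like "mark <description> as <label>".
# Patterns and action names are purely illustrative.
COMMAND_PATTERNS = [
    (re.compile(r"^(?:mark|label) (?P<target>.+) as (?P<label>.+)$", re.I),
     "assign_label"),
    (re.compile(r"^label all (?P<label>.+) in this image$", re.I),
     "label_all"),
    (re.compile(r"^change the last label(?:'s category)? to (?P<label>.+)$", re.I),
     "relabel_last"),
]

def parse_command(utterance: str) -> dict:
    """Map a transcribed voice command to a labeling action."""
    text = utterance.strip().rstrip(".")
    for pattern, action in COMMAND_PATTERNS:
        match = pattern.match(text)
        if match:
            return {"action": action, **match.groupdict()}
    return {"action": "unknown", "raw": utterance}

print(parse_command("Mark the red object on the left as a fire hydrant"))
print(parse_command("Change the last label's category to bicycle"))
```

The value of such a layer is that the annotator's hands stay free: the hard part is speech recognition and intent understanding, while dispatching to the labeling engine stays simple.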

Expected timeline: 2028-2032

Impact:

  • Learning costs reduced by 90%+
  • Interaction efficiency improved 5-10x
  • User experience significantly enhanced

Scenario outlook: A quality inspection engineer wearing AR glasses walks along a production line, spots a defective part, and says "defect, scratch." The system automatically takes a photo, labels it, and records it. The entire process takes less than 2 seconds, while the traditional approach requires: stopping, pulling out a phone, taking a photo, opening labeling software, manually labeling, saving -- at least 30 seconds.

Trend 3: Platform-Based Development

Data labeling is no longer an isolated step but a critical link in the AI development pipeline. Future labeling tools will evolve into comprehensive AI platforms, connecting the entire chain from data to model to application.

Development directions:

  • Labeling + Training + Deployment integration: One-stop AI platform

    • Complete everything on a single platform: data upload -> intelligent labeling -> model training -> performance validation -> one-click deployment
    • No need to switch between multiple tools, no complex data format conversions
    • Time from "having data" to "having a usable model" shortened from weeks to hours
    • Lowering the AI development barrier so SMEs can easily build AI applications
  • Dataset marketplace: Buy and sell datasets

    • High-quality labeled data will become tradeable digital assets
    • Companies can sell de-identified labeled data for additional revenue
    • Startups can purchase ready-made datasets to quickly launch AI projects
    • Data pricing, copyright protection, and quality certification will form a complete ecosystem
  • Model marketplace: Buy and sell trained models

    • A trading platform for pre-trained models, fine-tuned models, and industry-specific models
    • Model as a Service (MaaS): Pay per API call, no need to train your own
    • Model composition: Combine multiple specialized models into complex AI pipelines
    • Model version management, performance comparison, and A/B testing all in one place
  • Service marketplace: Buy and sell labeling services

    • Professional labeling teams join the platform to offer labeling services
    • On-demand hiring: No need to build a dedicated team for temporary projects
    • Quality assurance: Platform provides quality monitoring and dispute resolution
    • Global collaboration: Cross-timezone, cross-language labeling team coordination

Expected timeline: 2027-2030

Impact:

  • Simplified workflows
  • Significantly reduced costs
  • Significantly improved efficiency

Business model transformation: Under the traditional model, an AI project requires: purchasing labeling tools + building a labeling team + buying training servers + deployment and operations. In the future, all of this will be completed on a single platform, with pay-as-you-go pricing and elastic scaling. This will give rise to a new business model -- "AI capability as a service," where companies only need to focus on business innovation while the platform handles the technical infrastructure.

Trend 4: Industry-Specific Solutions

General-purpose tools are giving way to deep vertical industry specialization. Every industry has unique data characteristics, labeling standards, and quality requirements, and the future will see more specialized labeling tools optimized for specific industries.

Development directions:

  • Autonomous driving tools: Optimized for autonomous driving

    • Support for LiDAR point clouds, multi-camera fusion, HD maps, and other complex data types
    • Built-in traffic scene semantics: lane lines, traffic signs, pedestrian intent prediction
    • Temporal labeling: cross-frame object tracking, trajectory prediction, scene understanding
    • Quality management processes compliant with ISO 26262 and other automotive safety standards
    • Seamless integration with mainstream autonomous driving simulation platforms (CARLA, LGSVL)
  • Medical imaging tools: Optimized for medical imaging

    • Support for DICOM, NIfTI, and other medical imaging formats
    • Built-in anatomical knowledge: organ segmentation, lesion detection, image registration
    • 3D volumetric labeling: CT/MRI 3D reconstruction, organ volume measurement
    • Compliant with HIPAA, GDPR, and other medical data privacy regulations
    • Integration with PACS systems and electronic health record systems
    • Expert consultation: Multiple doctors collaborating on labeling difficult cases
  • Industrial quality inspection tools: Optimized for industrial QC

    • Support for high-resolution industrial cameras, X-ray, infrared thermal imaging, and other data sources
    • Built-in defect classification system: scratches, dents, cracks, color deviations, dimensional deviations
    • Pixel-level precise labeling: micrometer-level defect detection
    • Integration with MES and ERP systems for quality traceability
    • Support for online learning: rapidly incorporating new defect types into detection models
  • Retail and e-commerce tools: Optimized for retail and e-commerce

    • Product attribute labeling: color, material, style, brand recognition
    • Scene understanding: shelf display analysis, customer traffic heatmaps
    • Multi-language support: cross-border e-commerce product description labeling
    • Integration with mainstream e-commerce platform APIs
    • Fashion trend analysis: trending element identification, outfit recommendations

Expected timeline: 2026-2029

Impact:

  • Increased specialization
  • Improved efficiency and quality
  • Lower industry barriers

The value of industry specialization: General-purpose tools are like a "Swiss Army knife" -- they can do everything but excel at nothing. Industry-specific tools are like a "scalpel" -- achieving perfection in a specific domain. A medical imaging tool might have hundreds of built-in disease labeling templates, thousands of anatomical terms, and tens of thousands of expert-labeled reference cases -- things that general-purpose tools simply cannot provide.

Technical Development Directions

AI labeling tools show the following technical development trends:

Current state:

  • AI-assisted labeling technology is gradually maturing
    • Deep learning-based object detection models (YOLO, Faster R-CNN) can handle common scenarios
    • Semantic segmentation models (U-Net, DeepLab) are widely used in medical, remote sensing, and other fields
    • Large language models (GPT, Claude) are beginning to be applied to text labeling and conversational interaction
  • Tool functionality continues to improve
    • Support for multiple labeling types: bounding boxes, polygons, keypoints, semantic segmentation
    • Mature team collaboration features: task assignment, progress tracking, quality review
    • Enhanced data management capabilities: version control, format conversion, data augmentation
  • User experience continues to be optimized
    • More intuitive interface design, lower learning curve
    • Keyboard shortcuts and batch operations improve efficiency
    • Cloud deployment lowers the barrier to entry

Future trends:

  • AI-assisted labeling will become more widespread
    • From "optional feature" to "standard feature"
    • AI pre-labeling accuracy will improve from 80% to over 95%
    • The human role will shift from "annotator" to "reviewer"
  • Tool functionality will become more comprehensive
    • Support for more complex labeling types: 4D spatiotemporal labeling, causal relationship labeling, knowledge graph labeling
    • Increased intelligence: automatic task assignment, smart quality alerts, anomaly detection
    • Deep integration with the full AI development pipeline
  • User experience will continue to improve
    • Natural language interaction becomes mainstream
    • Personalized interfaces and workflows
    • Full support for mobile, AR/VR platforms

Changes in Application Structure

Changes in application methods:

  • AI-assisted labeling is becoming increasingly widespread

    • 2020: About 20% of labeling projects used AI assistance
    • 2025: About 60% of labeling projects use AI assistance
    • 2030 (projected): About 95% of labeling projects will use AI assistance
    • AI assistance will shift from "efficiency tool" to "necessity"
  • Manual labeling is primarily used for review and complex scenarios

    • Simple scenarios: AI completes automatically, humans spot-check
    • Medium scenarios: AI pre-labels, humans fine-tune
    • Complex scenarios: Humans lead, AI assists
    • Extreme scenarios: Purely manual (e.g., subjective judgment, creative labeling)
  • Tool platform functionality continues to expand

    • From single labeling tool -> data management platform -> AI development platform -> industry solutions
    • Functionality boundaries keep expanding, ecosystem continues to mature
    • Open APIs and plugin systems support customized extensions

Evolving User Needs

As AI technology develops and application scenarios expand, user demands for labeling tools are constantly evolving. From the initial "just make it work" to today's "fast, good, and affordable," user expectations keep rising.

Need 1: Higher Efficiency

Current: 80% improvement in labeling efficiency
Future: 95%+ improvement in labeling efficiency

How to achieve it:

  • Smarter AI

    • AI pre-labeling accuracy improves from 80% to 98%, humans only need to fine-tune
    • Smart recommendations: automatically suggest the most likely labels based on context
    • Batch processing: one-click batch labeling for similar images
    • Template reuse: labeling template library for common scenarios
  • More natural interaction

    • Voice commands replace mouse clicks, freeing up hands
    • Gesture operations enable quick selection and adjustment
    • Eye tracking assists in locating target areas
  • More automated workflows

    • Smart task assignment: automatically assign tasks based on annotator expertise
    • Automatic quality checks: real-time detection of labeling errors with instant feedback
    • Pipeline operations: seamless labeling, review, and correction workflow

The real significance of efficiency improvement: Suppose a project requires labeling 100,000 images. At the traditional rate of 30 seconds per image, the total comes to 833 hours of annotation time (about 35 days of round-the-clock work). A 95% efficiency improvement cuts that to roughly 42 hours -- less than 2 days. This isn't just time savings; it's a business model transformation -- rapid iteration and agile development become possible.

Need 2: Lower Costs

Current: 90% cost reduction
Future: 95%+ cost reduction

How to achieve it:

  • Free tools become widespread

    • Open-source tools are increasingly feature-rich, meeting most basic needs
    • Cloud-based free versions lower the entry barrier
    • Community-driven plugin ecosystems provide rich extensions
  • AI automation improves

    • AI handles simple tasks, humans focus on complex scenarios
    • Active learning reduces the amount of data that needs labeling
    • Transfer learning reuses existing models, reducing the cost of starting from scratch
  • Platform-based cost reduction

    • Pay-as-you-go, no large upfront investment needed
    • Shared infrastructure, distributed maintenance costs
    • Economies of scale drive decreasing marginal costs

Changes in cost structure:

Cost Item             2020    2025    2030 (Projected)
Tool procurement      30%     15%     5%
Labor costs           60%     50%     20%
Computing resources   10%     25%     15%
AI services           0%      10%     60%

Need 3: Higher Quality

Current: 95%+ labeling accuracy
Future: 99%+ labeling accuracy

How to achieve it:

  • AI quality assessment

    • Real-time detection of labeling anomalies: bounding box offset, category errors, missed targets
    • Consistency checks: ensuring uniform labeling style for similar targets
    • Confidence scoring: flagging uncertain labels for priority human review
  • Automatic error correction

    • Rule-based automatic corrections: bounding box alignment, label normalization
    • AI-based intelligent corrections: inferring correct labels from context
    • Cross-validation: automatic comparison of multi-annotator results to identify discrepancies
  • Continuous learning

    • Labeled data feeds back into AI models, creating a positive feedback loop
    • Error case library accumulates, preventing repeated mistakes
    • Industry best practices are captured, continuously raising standards
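Cross-validation of multi-annotator results can start from something as simple as an intersection-over-union (IoU) check between two annotators' boxes on the same image. A minimal sketch, where the box format, threshold, and flag names are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def flag_disagreements(labels_a, labels_b, iou_threshold=0.5):
    """Compare two annotators' labels, each a list of (box, category).

    Flags boxes with no sufficiently overlapping counterpart (possible
    misses) and overlapping pairs that disagree on category.
    """
    flags = []
    for i, (box_a, cls_a) in enumerate(labels_a):
        best_iou, best_j = 0.0, None
        for j, (box_b, _) in enumerate(labels_b):
            score = iou(box_a, box_b)
            if score > best_iou:
                best_iou, best_j = score, j
        if best_iou < iou_threshold:
            flags.append(("possible_miss", i, best_iou))
        elif labels_b[best_j][1] != cls_a:
            flags.append(("category_conflict", i, best_iou))
    return flags
```

Rule-based checks like this catch the mechanical errors (offset boxes, category conflicts, missed targets); the AI-based corrections described above would layer on top of them.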

The business value of quality: Labeling quality directly impacts AI model performance. Research shows that every 1% improvement in labeling accuracy can improve model performance by 2-5%. For safety-critical fields like autonomous driving and medical diagnosis, the gap between 99% and 95% could mean the difference between life and death. High-quality labeling is not a cost -- it's an investment.

Technology Breakthroughs

Over the next decade, several key technology breakthroughs will fundamentally change the rules of the data labeling game. These technologies are already showing remarkable potential in laboratories and are expected to move toward commercial application in the coming years.

Breakthrough 1: Zero-Shot Learning

Current: Requires training data
Future: Label without any training data

Zero-shot learning is one of the most exciting breakthroughs in AI. It enables AI to recognize object categories it has never seen before, completing labeling based solely on text descriptions.

Technical principles:

  • Leverages large-scale pre-trained models (such as CLIP, ALIGN) to learn the correspondence between images and text
  • Defines new categories through natural language descriptions, without providing any training samples
  • The model uses existing knowledge for reasoning, achieving "learning by analogy"
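The image-text correspondence can be pictured as nearest-neighbor search in a shared embedding space: a new class is just a text prompt, and an image is assigned to whichever prompt it embeds closest to. A minimal sketch with a mock text encoder standing in for a pre-trained model's text tower (in practice the vectors would come from a model like CLIP):

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def zero_shot_classify(image_embedding, class_prompts, text_encoder):
    """Assign the class whose text prompt embeds closest to the image.

    No training samples for the new classes are needed -- only prompts.
    """
    scores = {name: cosine_similarity(image_embedding, text_encoder(prompt))
              for name, prompt in class_prompts.items()}
    return max(scores, key=scores.get), scores

# Mock encoder: a stand-in for a real pre-trained text tower.
_rng = random.Random(0)
_vocab = {}
def mock_text_encoder(prompt):
    if prompt not in _vocab:
        _vocab[prompt] = [_rng.gauss(0, 1) for _ in range(64)]
    return _vocab[prompt]

prompts = {"screwdriver": "a photo of a screwdriver",
           "hammer": "a photo of a hammer"}
# Pretend the image embeds close to the screwdriver prompt.
image_vec = [x + 0.1 * _rng.gauss(0, 1)
             for x in mock_text_encoder("a photo of a screwdriver")]
label, scores = zero_shot_classify(image_vec, prompts, mock_text_encoder)
print(label)
```

Adding the "screwdriver" category from the earlier example then really is just adding one line of text, which is the whole point of the zero-shot approach.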

Application scenarios:

  • New product launches: Start labeling immediately without waiting for data accumulation
  • Long-tail categories: No need to specifically collect training data for rare objects
  • Rapid prototyping: Validate ideas without investing heavily in labeling costs

Impact:

  • Labeling costs reduced by 99%+
  • Labeling speed improved 100x+
  • Labeling barrier significantly lowered

Real-world case: A biotech company needs to label a newly discovered cell type. The traditional approach requires: collecting samples -> expert labeling -> model training -> validation, a process that could take months. With zero-shot learning, you just tell AI "this is a Y-type cell with X characteristics," and AI can automatically identify it in microscope images -- the entire process might take just a few minutes.

Breakthrough 2: Multimodal Understanding

Current: Single modality (images)
Future: Multimodal (image + text + audio + video + 3D)

Multimodal AI can simultaneously understand and process multiple types of data, just like humans can "see, hear, read, and speak."

Technical principles:

  • Unified representation space: Mapping data from different modalities into the same semantic space
  • Cross-modal attention mechanisms: Enabling the model to correlate information across modalities
  • Multimodal pre-training: Learning universal representations from large-scale multimodal data

Application scenarios:

  • Video labeling: Comprehensive understanding combining visuals, audio, and subtitles
  • Document understanding: Simultaneously processing text, charts, and formulas
  • Scene understanding: Fusing images, depth, and semantics for 3D scene reconstruction

Impact:

  • More accurate labeling
  • Faster labeling
  • Smarter labeling

Technology evolution roadmap:

2020: Single-modal models (image OR text OR audio)
    |
2023: Dual-modal models (image + text, e.g., CLIP)
    |
2025: Tri-modal models (image + text + audio)
    |
2028: Full-modal models (image + text + audio + video + 3D + sensors)
    |
2030: Embodied intelligence (real-time interaction with the physical world)

Breakthrough 3: Active Learning

Current: Passive labeling
Future: Proactively selecting data that needs labeling

Active learning transforms AI from "passive acceptance" to "active thinking," intelligently selecting the most valuable data for labeling.

Technical principles:

  • Uncertainty sampling: Selecting samples where the model is most uncertain
  • Diversity sampling: Selecting the most representative samples
  • Expected model change: Selecting samples that would most improve the model
  • Committee query: Multiple models vote, selecting samples with the greatest disagreement
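Uncertainty sampling, the simplest of these strategies, can be sketched in a few lines: score each unlabeled sample by the entropy of the model's predicted class probabilities and send the highest-entropy samples to human annotators. The data below is illustrative:

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(predictions, budget):
    """Uncertainty sampling: pick the samples the model is least sure about.

    `predictions` maps sample ids to predicted class-probability lists;
    the `budget` most-uncertain samples go to human annotators.
    """
    ranked = sorted(predictions,
                    key=lambda sid: entropy(predictions[sid]),
                    reverse=True)
    return ranked[:budget]

preds = {
    "img_001": [0.98, 0.01, 0.01],  # confident -- safe to auto-label
    "img_002": [0.40, 0.35, 0.25],  # uncertain -- worth human review
    "img_003": [0.55, 0.40, 0.05],  # borderline
}
print(select_for_labeling(preds, budget=1))  # ['img_002']
```

Diversity sampling, expected model change, and committee query replace the entropy score with other ranking criteria, but the select-then-label loop stays the same.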

Application scenarios:

  • Large-scale datasets: Filtering the most valuable 10,000 images from millions
  • Continuous learning: Intelligently selecting data that needs supplementary labeling after model deployment
  • Cost-sensitive scenarios: Maximizing labeling effectiveness within a limited budget

Impact:

  • Labeling efficiency improved 10-20x
  • Labeling costs reduced by 90%+
  • Labeling quality improved

Mathematical intuition: Suppose you have 1 million images to label, but your budget only covers 10,000.

  • Random selection: You might pick a lot of "easy" samples that teach the model nothing new
  • Active learning: Intelligently selects "boundary samples," where every image teaches the model something new
  • Result: Achieve 80% effectiveness with just 1% of the data

The active learning loop:

Data pool -> AI filtering -> Human labeling -> Model training -> Performance evaluation -> Update filtering strategy -> Data pool

This loop turns labeling into a continuous optimization process rather than a one-time task.

TjMakeBot's Continuous Improvement

As an innovator in the AI labeling tool space, TjMakeBot always stays at the forefront of technology, continuously providing users with a better labeling experience. Here are our development plans and commitments.

Feature Optimization Directions

User experience:

  • Continuously optimizing interaction experience

    • Simplifying workflows, reducing unnecessary clicks
    • Optimizing interface response speed for millisecond-level feedback
    • Supporting personalized interface configuration to adapt to different user habits
    • Dark mode, eye-care mode, and other theme options
  • Improving tool usability

    • Smart guidance: New users get started in 5 minutes
    • Keyboard shortcut system: Advanced users double their efficiency
    • Right-click menus: Common functions accessible with one click
    • Undo/redo: Unlimited operation history
  • Enhancing documentation and tutorials

    • Video tutorials: Complete courses from beginner to advanced
    • Interactive tutorials: Learn by doing with instant feedback
    • Best practice guides: Industry expert experience sharing
    • Community Q&A: User mutual assistance for quick problem solving

Feature expansion:

  • Support for more labeling types

    • 3D point cloud labeling: Support for LiDAR data
    • Video labeling: Temporal object tracking
    • Audio labeling: Speech recognition, sentiment analysis
    • Document labeling: OCR, table extraction, layout analysis
  • Enhanced AI assistance capabilities

    • Integration of more pre-trained models
    • Support for custom model imports
    • Continuous learning: Labeled data automatically optimizes models
    • Smart recommendations: Suggesting the best labeling strategy based on the scenario
  • Improved team collaboration features

    • Real-time collaboration: Multiple people labeling the same project simultaneously
    • Task management: Smart assignment, progress tracking, performance statistics
    • Quality management: Multi-level review, sampling mechanisms, quality reports
    • Permission management: Fine-grained role and permission controls

Technical Development Directions

AI capabilities:

  • Continuously improving AI model performance

    • Keeping up with the latest academic research
    • Collaborating with top AI laboratories
    • Continuously optimizing model inference speed
    • Supporting edge device deployment
  • Optimizing natural language understanding

    • Supporting more complex labeling instructions
    • Multi-language support: Chinese, English, Japanese, Korean, and more
    • Context understanding: Remembering conversation history
    • Intent recognition: Understanding what users truly want
  • Improving labeling accuracy

    • Continuously collecting user feedback to optimize models
    • Incorporating more domain knowledge
    • Multi-model ensemble for improved robustness
    • Uncertainty estimation, intelligently requesting human assistance

Platform capabilities:

  • Support for more data formats

    • Images: JPEG, PNG, TIFF, RAW, HEIC
    • Video: MP4, AVI, MOV, WebM
    • 3D: PLY, PCD, LAS, OBJ
    • Medical: DICOM, NIfTI
    • Remote sensing: GeoTIFF, HDF5
  • Optimized processing performance

    • Streaming processing for large files, supporting GB-scale data
    • GPU acceleration, 10x inference speed improvement
    • Distributed processing, supporting massive datasets
    • Smart caching, reducing redundant computation
  • Enhanced stability

    • 99.9% service availability target
    • Automatic fault recovery
    • Multiple data backups
    • Security compliance: GDPR, SOC2

Our commitment: TjMakeBot will always be guided by user needs, continuously innovating and evolving. We believe the best products are co-created with users. We welcome every user's suggestions and feedback -- let's shape the future of AI labeling tools together.

Conclusion

Over the next 10 years, AI labeling tools will become smarter, more natural, and more efficient. From multimodal AI to zero-shot learning, from platform-based development to industry-specific solutions, technology will continue to break through and applications will continue to expand.

Let's review what we've explored in this article:

Technology level:

  • Multimodal AI will break down data type barriers, achieving "full-sensory" understanding
  • Zero-shot learning will eliminate the "cold start" problem, making AI instantly usable
  • Active learning will transform labeling from "physical labor" to "intellectual work"
  • Natural interaction will make labeling as simple as conversation

Industry level:

  • Platform-based development will connect the entire chain from data to model to application
  • Industry-specific solutions will go deep into vertical domains, providing specialized services
  • Marketplaces for data, models, and services will give rise to new business models
  • AI labeling will shift from a "cost center" to a "value center"

User level:

  • 95%+ efficiency improvement, 95%+ cost reduction, 99%+ quality
  • Dramatically lower learning barriers -- anyone can use AI labeling tools
  • Human-AI collaboration becomes mainstream, with humans focusing on creative work

Remember:

  • Technology keeps innovating -- every year brings new breakthroughs, so maintain a learning mindset
  • Applications keep expanding -- AI labeling is penetrating every industry
  • User needs keep evolving -- from "usable" to "user-friendly" to "intelligent"
  • Tools keep evolving -- today's "cutting-edge tech" will become tomorrow's "standard feature"

Standing at the beginning of 2026, we are in the golden age of AI labeling tool development. Over the past decade, we witnessed the leap from manual labeling to AI-assisted labeling; over the next decade, we will witness the transformation from AI-assisted to AI-led labeling.

This is not just technological progress -- it's the liberation of productivity. As AI takes on the heavy lifting of labeling, humans will have more time and energy to invest in creative work -- designing better AI systems, solving more complex problems, and creating greater value.

Choose TjMakeBot and embrace the future of AI labeling tools!

Let's witness and participate in this transformation together.


Legal Disclaimer: The content of this article is for reference only and does not constitute any legal, business, or technical advice. When using any tools or methods, please comply with relevant laws and regulations, respect intellectual property rights, and obtain necessary authorizations. All company names, product names, and trademarks mentioned in this article are the property of their respective owners.

About the Author: The TjMakeBot team focuses on AI data labeling tool development, dedicated to driving continuous innovation and development in AI labeling tools.

Keywords: AI labeling future, labeling tool trends, technology predictions, AI development, TjMakeBot, future technology