Must-Have Tools for Business Analysts

In today’s fast-evolving tech world, working smart has become even more crucial than working hard. In IT environments — and in any modern business — managing a growing amount of complex work can’t rely solely on memory, scattered emails, or individual Excel sheets. One of the most effective ways to boost productivity intelligently is through the use of supporting tools. This isn’t just a trend anymore — it’s quickly becoming the standard in many companies. For Business Analysts (BAs), the right tools don’t just make you more efficient — they make you more professional. Let’s explore some essential tools every BA should have in their toolkit 👇

1. Draw.io

A free, intuitive diagramming tool to visualize processes, systems, data, or ideas. It’s ideal for modeling workflows and mapping business logic.

Key Features:
- Free and no registration required — just go to diagrams.net.
- Flexible storage — save files locally or to Google Drive, OneDrive, GitHub, GitLab.
- Rich icon library — supports UML, BPMN, flowcharts, network diagrams, and more.
- UML & BPMN ready — perfect for use cases, activity diagrams, and business flows.
- Easy collaboration when stored on shared drives.
- Cross-platform — available on web, desktop, and as a VS Code extension.

Limitations:
- Real-time collaboration isn’t as strong as in tools like Figma.
- Performance may drop with very large or complex diagrams.

2. Miro

Miro is an online collaborative whiteboard designed for teams to brainstorm, plan, and visualize ideas in real time.

Key Features:
- Infinite canvas — visualize projects without space limits.
- Real-time collaboration — comment, vote, and co-edit instantly.
- Rich templates — includes user story maps, journey maps, mindmaps, Kanban boards, and wireframes.
- Integrations — connects with Jira, Confluence, Slack, Teams, Google Drive, and more.
- Great for mapping processes, use cases, roadmaps, or even UI mockups.

Limitations:
- Free plan limits the number of boards.
- Large boards with many assets may slow down performance.

3. Trello

Trello is a Kanban-based task management tool that helps teams visualize and track progress easily.

Key Features:
- Simple drag-and-drop interface.
- Highly customizable boards, lists, and cards.
- Each card can include checklists, attachments, labels, due dates, and assignees.
- Seamless integration with Google Drive, Slack, Jira, GitHub, and others.
- Real-time updates across all team members.
- Works on web, desktop, and mobile.

Limitations:
- Free plan limits the number of integrations (Power-Ups).

4. Jira

Jira by Atlassian is the industry-standard project management tool for Agile teams.

Key Features:
- Built for Scrum and Kanban teams.
- Highly customizable workflows, fields, and automation rules.
- Transparent tracking of tasks, blockers, and progress.
- Integrates with hundreds of DevOps, CI/CD, and testing tools.
- Scales from individual tasks to enterprise-level project portfolios.

Limitations:
- Steep learning curve for beginners.
- Can be costly for large teams.
- Requires experienced admins for setup and maintenance.
- May run slower on large, complex projects.

5. Typescale

A handy tool for generating consistent typography systems (font size, line height, spacing) for web or app design.

Key Features:
- Automates type scale creation.
- Multiple presets and flexible customizations.
- Preview and export CSS directly.
- Ensures responsive and accessible typography.

Limitations:
- Not suitable for all design systems or content types.
- Limited control over detailed responsive behavior.

6. Adobe Color

An intuitive color palette generator to create harmonious and accessible color schemes.

Key Features:
- Easy-to-use color wheel with real-time updates.
- Auto-generates color harmonies based on color theory.
- Supports HEX, RGB, and CMYK formats.
- Integrates seamlessly with Adobe tools like Photoshop, Illustrator, and XD.
- Community palette sharing and inspiration gallery.

Limitations:
- Contrast still needs manual checking for accessibility.
- Some auto-generated palettes may need manual tweaking.
- Colors can look different on various screens.

7. Contrast Checker

A simple but vital tool to ensure readability and accessibility by checking text and background contrast against WCAG standards.

Key Features:
- Simple interface — input colors and get instant feedback.
- Ensures compliance with accessibility guidelines.
- Real-time updates as you adjust colors.
- Bridges design and development — everyone can validate contrast easily.

Limitations:
- Doesn’t reflect results accurately for complex backgrounds.
- Doesn’t account for font size, spacing, or user testing conditions.

To see what such checkers actually compute, take a look at the sketch below.
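Here is a minimal sketch of the WCAG 2.x contrast-ratio calculation that contrast checkers perform. The luminance weights and thresholds come from the WCAG definition; the function names and hex parsing are illustrative.

// Relative luminance per WCAG 2.x: linearize each sRGB channel, then weight it.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [0, 2, 4].map((i) => {
    const channel = parseInt(hex.replace('#', '').slice(i, i + 2), 16) / 255;
    // Undo the sRGB gamma encoding.
    return channel <= 0.03928 ? channel / 12.92 : ((channel + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio ranges from 1:1 (identical colors) to 21:1 (black on white).
function contrastRatio(foreground: string, background: string): number {
  const [lighter, darker] = [relativeLuminance(foreground), relativeLuminance(background)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal text and 3:1 for large text.
console.log(contrastRatio('#767676', '#ffffff').toFixed(2)); // ≈ 4.54, so this gray passes AA on white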
Why Use These Tools?
- Transparency: Everything — from tasks to deadlines — is clearly tracked. For example, Trello helps answer questions like “Who’s doing what?” and “What’s the current status?”
- Visualization: Tools like Draw.io help transform abstract logic into clear, easy-to-understand diagrams.
- Collaboration: Integrating tools like Miro, Jira, or Slack ensures everyone stays aligned and reduces miscommunication.

Tips for Getting Started
- Start small: You don’t need every tool at once. Begin with Jira or Trello, then expand.
- Build shared habits: Tools only work when the whole team uses them consistently.
- Learn by doing: Explore free trials and tutorials, then apply them directly in your current projects.
- Stay updated: Tools evolve fast — keeping up helps you stay ahead.

Using tools isn’t just about having more software — it’s about changing the way we work. They make our processes more transparent, our teamwork more seamless, and our output more efficient. For Business Analysts, these tools are not just “nice-to-have” — they’re what turn you from a task executor into a strategic enabler for your team. Read more related articles from SupremeTech!

    31/10/2025

    157

    Sang Ngo


      How to Step Out of the “Forwarder” Shadow?

Have you ever, as a Comtor or Business Analyst (BA), felt like… a messenger? Every time the client asks something, you turn to the team, copy their answer, translate it, and send it back — just passing messages instead of actually owning the conversation. At SupremeTech, our BA team jokingly calls this role the “Professional Forwarder.” Through many “lost in translation” moments, we’ve learned valuable lessons on how to step out of that shadow — to become real connectors between the client and the team. Let’s hear from our BA team as they share practical tips, drawn directly from real project experience, to help you move beyond being a “forwarder.”

Signs You Might Be Forwarding Too Much

1. The classic line: “Let me check with the team.” It’s not wrong — but if you’re saying it too often, it might mean you don’t fully understand the issue.

2. Lack of confidence in meetings: Many new BAs struggle with open-ended questions. When you don’t fully understand the product, you can’t confidently answer questions from both the client and your internal team. The PM asks about progress, you look at the Sprint Backlog full of numbers — and still don’t know where to start.

3. Avoiding technical talk: The moment you hear technical terms, you “pass the ball” to the PTL — without really understanding what’s being discussed.

3 Steps to Escape the “Forwarder Manager” Role

So, how can you move from being a Forwarder to becoming a true communicator — someone who understands, connects, and leads discussions effectively? Here are three simple but powerful steps you can start practicing right away:

1. Before Forwarding, Ask Yourself:
- Do I understand at least 70% of this content?
- Have I tried to reproduce the bug, test the feature in the DEV environment, or explore the possible cause myself?
- If I were the dev/tester receiving this message, would I have enough context to understand it?
- Can I classify the issue — is it about UI/UX, logic, data, or business flow?
- Can I try to answer part of it first, then confirm later?

👉 This habit helps you learn something new every day, instead of just finishing tasks every day.

2. In Every Meeting – Observe and Lead
- What is the team really discussing? Do I understand the big picture?
- If the conversation is technical, how does it relate to the overall context?
- Is anyone confused? Can I help clarify?

If you find yourself unsure about all three — take notes, take notes, and take notes. Meeting minutes and your own notes will help you retain details and follow up later for deeper understanding.

3. Build Strong Foundations

Whether you’re a Comtor, BA, or PO, a solid foundation in product knowledge, business logic, and basic technical understanding helps you make better decisions — and lead your team effectively. Don’t get stuck thinking “that’s not my task.” Instead, learn actively by:
- Reading about technical keywords used in your project.
- Redrawing the business flow yourself to truly understand it.
- Asking devs, QCs, PTLs, and clients for their perspectives.
- Finding a technical advisor who can review your understanding and answer your tech-related questions.

Every time you’re about to forward a message, pause for a minute — dig a little deeper. Each pause adds to your knowledge and analytical mindset. These small daily efforts will sharpen your skills and confidence — helping you grow not only as a professional BA, but also as a potential Project Leader who truly adds value to the team.

      31/10/2025

      168

      BA Team


        Top 10 Digital Commerce Companies in Vietnam

Vietnam has emerged as one of Southeast Asia’s fastest-growing digital commerce markets. With over 100 million people, a rapidly expanding middle class, and high internet penetration (more than 75%), the country offers fertile ground for e-commerce businesses to thrive. According to Vietnam News, local consumers spent about US$16 billion online in 2024 on major platforms like Shopee, Lazada, and TikTok Shop. Meanwhile, e.vnexpress.net reports that the total market size has reached US$22 billion, making Vietnam the third-largest e-commerce market in Southeast Asia. Experts project the market will continue growing at a CAGR of over 21% until 2030, reaching nearly US$62.5 billion (Mordor Intelligence). This impressive growth makes choosing the right technology partner crucial for businesses aiming to scale digital commerce operations in Vietnam. To help you navigate the landscape, SupremeTech has curated a list of the Top 10 Digital Commerce Companies in Vietnam, highlighting their strengths and expertise.

SupremeTech

SupremeTech is a product-focused Agile development company in Vietnam, currently serving clients across Japan, the US, and Australia. They specialize in digital transformation and software solutions for large corporations in retail, healthcare, F&B, and more. Established in 2020, SupremeTech has grown rapidly from just a few members at the beginning to over 180 employees.

At SupremeTech, we implement the Scrum methodology and Agile framework to enhance efficiency and innovation. We optimize and leverage the Agile process to deliver a working product faster than a standard sprint, and we provide real-time progress reports for each project because we value transparency and collaboration. AI-assisted development is currently being applied to custom software projects to shorten delivery time and optimize cost for clients.

Founded: 2020
Team size: 180+ employees
Key clients: Enterprises and multinational brands in industries such as Retail, E-commerce, Healthcare, and Human Resources.
Strengths:
- Agile Offshore Dedicated Teams
- Digital Transformation for Retail Brands
- Web & Mobile Application Development
- Cloud Infrastructure Migration & DevOps
- OTT Streaming White-label Apps
- ISO/IEC 27001:2022 certified, ISTQB Partner Program member

Kyanon Digital

Kyanon Digital is a leading technology company in Vietnam specializing in digital commerce solutions, with the slogan “Making Digital Impact that Matters”. Founded in 2012, the company provides end-to-end services that help businesses design, build, and scale their digital commerce platforms. Their expertise covers B2B, D2C, marketplaces, composable commerce, and omni-channel growth. With a strong focus on Agile development, seamless system integration, user-centric design, and long-term optimization, Kyanon Digital positions itself as a trusted partner that delivers not just digital commerce platforms but also sustainable growth and innovation for clients.

Founded: 2012
Team size: 500+
Key clients: Leading retail groups in Japan and Thailand, NutriAsia, confidential regional enterprises…
Strengths:
- Wide service coverage: Expertise in B2B, marketplace, composable commerce, and omni-channel solutions.
- Data integration & personalization: Strong capabilities in unifying customer data, enabling predictive analytics, and creating personalized customer experiences.
- User-centric design: Focus on seamless omni-channel journeys with intuitive, mobile-friendly interfaces.
- Agile & engineering excellence: Proven Agile methodology, cloud-native and microservices architecture, plus ISO 9001 and ISO 27001 certifications.
- Long-term support: Provides ongoing operations, maintenance, and optimization beyond system launch.
- Trusted by top brands: Collaborates with Sharp, Central Retail, Unilever, Starbucks, and other major enterprises.

Afocus

Afocus is a team of passionate design-thinkers, curious product strategists, and innovative digital transformers based in Vietnam. They focus on products, not projects, with their clients’ business growth as the highest priority from day one. From ideas to delivery, Afocus supports each client along the full life cycle of their digital initiatives:
- Analyzing business, marketing, and sales targets, competition, and constraints.
- Identifying and collecting requirements.
- Establishing, redefining, and implementing branding, marketing, and advertising strategies.
- Elaborating concepts (IA & wireframes/mockups) from simple business ideas.
- Designing responsive, intuitive user interfaces (UI: look & feel) and experiences (UX), plus system architecture.
- Coding sites/apps/software in an agile, test-driven mode.
- Controlling and assuring quality to international standards (plus user testing).
- Deploying and following up with evolutive and corrective maintenance.
- Optimizing traffic (ASO/SEO), usage, and sales with data collection, analysis, and reporting.

Groove Technology

Groove Technology is a one-stop partner for companies worldwide that need support developing digital products and custom software solutions. Their integrated resource model paves the way for technology projects to be completed sooner, with less effort. They help businesses expand their software development capabilities through:
- Ready-made, well-oiled offshore teams at your disposal.
- Proactive and innovative software development approaches.
- A partnership that prioritises trust and delivering quality solutions.

Adamo Software

As a top software development company based in Vietnam, Adamo Software delivers cutting-edge digital solutions to global organizations, helping them adopt new technologies and transform business operations. Adamo offers full-cycle, customized software development services with high-quality, cost-effective solutions. Listed among Vietnam’s top 10 software development companies, Adamo excels at mobile app development, web-based solutions, website development, and portal development. Their skillful and experienced developers provide innovative, efficient, tailored, and sustainable digital solutions. Whether it is a user-centric app or transformative corporate-level software, Adamo will turn your business ideas into superb software products with continuous support.

CodeNinja

At CodeNinja, they believe there’s a lot of untapped engineering potential in the world — and they’re here to tap it. They’re a mission-driven software company of 250+ engineers striving to solve the world’s hardest problems for people, businesses, and governments by tapping the untapped engineering potential of high-growth and emerging markets. Their mission is to improve the lives of three billion people living in emerging markets by creating opportunities in technology.

SECOMM

SECOMM is a full-service ecommerce solution provider using various platforms, tools, and technologies to handle even the most complex business systems:
- Ecommerce Consulting
- Ecommerce Development
- Ecommerce Maintenance
- Ecommerce Acceleration

BSS Commerce

BSS Commerce is a global full-service eCommerce agency that provides cutting-edge technology solutions to B2B, B2C, and B2B2C businesses. They are empowered by partnerships with multiple platform providers and highly qualified experts with customer-centric value at heart. As an accredited eCommerce solution provider, BSS offers a comprehensive eCommerce strategy to accelerate your business through wide-scale services on multiple platforms. They also enhance your eCommerce systems with highly recommended Magento extensions, Shopify apps, and Shopware extensions. They bring your eCommerce vision to life with their global standards, best-in-class service, and solution-oriented mindset.

Magenest JSC

Magenest is a one-stop digital solution provider with a special focus on eCommerce systems, ERP/CRM platforms, cloud infrastructure, digital marketing, and more. As a leading technology solution company in APAC, they have helped brands activate and scale their digital presence, transform business operations, and empower the workforce through solutions built on Adobe Magento Commerce, Odoo, HubSpot, and Amazon Web Services. The quality of their work is backed by industry leaders: SM Markets, Abbott, Heineken, Trung Nguyen Legend, Bibomart, ACFC, Hoang Phuc International, etc.

AMELA Technology

AMELA Technology is a global IT services and consulting company established in Hanoi, Vietnam. They bring your idea to life by bridging technological gaps and manpower shortages with the following top-tier solutions:
- Software Outsourcing & Development
- Emerging tech: Blockchain, IoT, and AI solutions
- Web & Mobile App Development
- Embedded Systems
- Quality Control & Testing
- Start-up support
- Human resource introduction
- Engineer dispatching

In the course of their development, they have served clients in Japan, one of the most demanding markets, across a variety of industries, including eLearning, eCommerce, live streaming, healthcare, and ERP.

Why Work with Digital Commerce Companies in Vietnam?
- Cost-effective yet high-quality talent: Vietnam offers competitive rates with strong technical expertise.
- Deep understanding of local & ASEAN markets: Local partners have practical insights into consumer behavior in the region.
- Modern methodologies (Agile, Composable, Modular): These companies adopt cutting-edge approaches to keep pace with market shifts.
- End-to-end support: From consulting and implementation to scaling and maintenance, businesses are fully supported.

Final thoughts

Vietnam’s digital commerce market is booming, presenting huge opportunities for both local and international businesses. By collaborating with the right technology partner, companies can accelerate growth, enhance customer experiences, and scale sustainably in this competitive market. Are you looking to build or expand your digital commerce capabilities? Get in touch with SupremeTech today and discover how we can turn your vision into a scalable success story.

        26/09/2025

        803

        Quy Huynh


        How Could You Join a Hackathon Without Knowing This?

In the ever-evolving world of programming, the emergence of intelligent support tools is changing the way we write code. Copilot, often described as “AI-powered pair programming”, promises to revolutionize the workflow of software developers. In this article, I’ll focus on GitHub Copilot, the AI tool I personally use every day when coding.

What is GitHub Copilot?

GitHub Copilot is an AI assistant developed by GitHub and OpenAI that integrates into IDEs (VS Code, IntelliJ IDEA/PyCharm, Neovim). It provides context-aware code suggestions as you type and includes Copilot Chat for Q&A directly inside the IDE.

Key Advantages of GitHub Copilot
- Faster coding: Reduce time spent on repetitive tasks with context-aware suggestions (functions, code blocks, basic tests).
- Learn new technologies quickly: Get API/syntax examples directly in your IDE; ask follow-ups via Copilot Chat.
- Automate boring work: Scaffold endpoints, write boilerplate, create sample tests, suggest snippets, and ensure consistent formatting.
- Seamless IDE integration: Works in VS Code, JetBrains, Neovim; suggestions appear as ghost text/inline as you type.

Limitations to Keep in Mind
- Not always accurate: May generate syntax, logic, or performance errors. Solution: Always review, run lint/tests, and benchmark when needed.
- Security & copyright risks: Output could resemble public code, and sensitive data could leak if pasted into prompts. Solution: Enable “block suggestions matching public code,” avoid entering secrets, and follow organizational policies.
- Risk of dependency: Over-reliance may weaken fundamental coding skills. Solution: Use Copilot for speed, but keep code reviews and tests.
- Limited domain knowledge: Suggestions may not fit specific business contexts. Solution: Break down requests, add examples/constraints, and manually refine critical parts.

Quick Start (VS Code)
1. Install the GitHub Copilot extension and (optionally) GitHub Copilot Chat.
2. Log in to GitHub and enable suggestions in Settings.
3. Create a new file and describe requirements in Vietnamese/English within comments or docstrings.
4. Press Tab to accept, Esc to skip. Check your IDE shortcuts for more.

Simple Examples

Just comment your request, and GitHub Copilot will write code for you.

Example 1: Utility function to validate email (JavaScript)

// Write function isValidEmail(email: string): boolean
function isValidEmail(email) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

Note: The regex above is basic — adjust according to project needs.

Example 2: Quick API skeleton (Node.js/Express)

// Create route GET /health that returns { status: 'ok' }
app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

Example 3: Basic unit test (Jest)

// Write test for sum(a,b): 1+2=3, -1+1=0
// (assumes a sum helper exists, e.g. const sum = (a, b) => a + b;)
test('sum basics', () => {
  expect(sum(1, 2)).toBe(3);
  expect(sum(-1, 1)).toBe(0);
});

Tips for Using GitHub Copilot Effectively
- Write clear descriptions/comments: specify input, output, constraints, and examples.
- Always check & optimize the code: run lint/tests, review performance and security.
- Break down complex requests: guide it step by step for more accurate suggestions.
- Use Copilot Chat to research and explain, but always verify against original docs.

Key Notes
- Enable “block suggestions matching public code” in organizational projects.
- Avoid pasting secrets (keys, credentials, sensitive data) into prompts.

Conclusion

GitHub Copilot is an AI assistant that helps you code faster, learn new tech quickly, and automate repetitive tasks — but you still need to review, test, and follow security policies.
My Personal Experience Coding with GitHub Copilot

Before using GitHub Copilot:
- Spent lots of time on repetitive, structured code.
- Slowed down by switching between coding and researching online.

When first trying Copilot:
- Felt the efficiency gains on simple features/functions.
- Struggled with complex features — Copilot often generated unnecessary code.
- Spent extra time reviewing Copilot’s output.

After long-term use:
- Significantly reduced time on repetitive tasks (boilerplate, data mapping, simple CRUD, …).
- More consistent code (naming, structure) and better documentation (docs, README) thanks to quick suggestions.
- Changed workflow: “comment-first” or “test-first” to guide Copilot, using Chat to refine and explain.
- Formed a risky habit: accepting Copilot’s suggestions too quickly without reviewing.

Start Small & Measure Effectiveness

Enable Copilot in your IDE, try it on a utility function or basic test, turn on the “block public code” filter, and avoid pasting secrets. After one week, measure effectiveness (task completion time, amount of boilerplate written manually, number of minor bugs), then decide how much to apply in projects. Good luck using GitHub Copilot effectively — and may you achieve great success at the Hackathon!

        22/08/2025

        624


          Level Up Your Code: Transitioning to Validated Environment Variables

Environment variables play a critical role in software projects of all sizes. As projects grow, so does the number of environment variables — API keys, custom configurations, feature flags, and more — and managing them effectively becomes increasingly complex. If mismanaged, they can lead to severe bugs, server crashes, and even security vulnerabilities. While there’s no one-size-fits-all solution, having some structure in how we manage environment variables can really help reduce mistakes and confusion down the road. In this article, I’ll share how I’ve been handling them in my own projects and what’s worked well for me so far.

My Personal Story

When I first started programming, environment variables were a constant source of headaches. I often ran into problems like:
- Misspelled variable names.
- Failure to retrieve variable values, even though I was sure they were set.
- Forgetting to define variables entirely, leading to runtime errors.

These issues were tricky to detect. Typically, I wouldn’t notice anything was wrong until the application misbehaved or crashed. Debugging these errors was tedious — tracing back through the code to find that the root cause was a missing or misconfigured environment variable. For a long time, I struggled with managing environment variables. Eventually, I discovered a more effective approach: validating all required variables before running the application. This process has saved me countless hours of debugging and has become a core part of my workflow. Today, I want to share this approach with you.

A Common Trap in Real Projects

Beyond personal hiccups, I’ve also seen issues arise in real-world projects due to manual environment handling. One particular pitfall involves relying on if/else conditions to set or interpret environment variables like NODE_ENV. For example:

if (process.env.NODE_ENV === "production") {
  // do something
} else {
  // assume development
}

This type of conditional logic can seem harmless during development, but it often leads to incomplete coverage during testing. Developers typically test in development mode and may forget or assume things will "just work" in production. As a result, issues are only discovered after the application is deployed — when it's too late. In one of our team’s past projects, this exact scenario caused a production bug that slipped through all local tests. The root cause? A missing environment variable that was only required in production, and the conditional logic silently skipped it in development. This highlights the importance of failing fast and loudly — ideally before the application even starts. And that’s exactly what environment variable validation helps with.

The Solution: Validating Environment Variables

The secret to managing environment variables efficiently lies in validation. Instead of assuming all necessary variables are correctly set, validate them at the application’s startup. This prevents the application from running in an incomplete or misconfigured state, minimizing runtime errors and improving overall reliability.

Benefits of Validating Environment Variables
- Error prevention: Catch missing or misconfigured variables early.
- Improved debugging: Clear error messages make it easier to trace issues.
- Security: Ensures sensitive variables like API keys are set correctly.
- Consistency: Establishes a standard for how environment variables are managed across your team.

Implementation

Here’s a simple and structured way to validate environment variables in a TypeScript project.

Step 1: Define an Interface

Define the expected environment variables using a TypeScript interface to enforce type safety.

export interface Config {
  NODE_ENV: "development" | "production" | "test";
  SLACK_SIGNING_SECRET: string;
  SLACK_BOT_TOKEN: string;
  SLACK_APP_TOKEN: string;
  PORT: number;
}

Step 2: Create a Config Loader

Write a function to load and validate environment variables. This loader ensures that each variable is present and meets the expected type or format. A minimal sketch of such a loader follows.
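This sketch assumes the Config interface from Step 1 is exported from ./types and that this file is the ./loader module imported in Step 3; adapt the paths to your project. It throws on any missing or malformed variable, so the application fails fast at startup.

import { Config } from "./types"; // assumed location of the Config interface

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast and loudly, before the app starts serving traffic.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export function loadConfig(): Config {
  const nodeEnv = requireEnv("NODE_ENV");
  if (!["development", "production", "test"].includes(nodeEnv)) {
    throw new Error(`NODE_ENV must be development, production, or test; got "${nodeEnv}"`);
  }

  const port = Number(requireEnv("PORT"));
  if (!Number.isInteger(port)) {
    throw new Error("PORT must be an integer");
  }

  return {
    NODE_ENV: nodeEnv as Config["NODE_ENV"],
    SLACK_SIGNING_SECRET: requireEnv("SLACK_SIGNING_SECRET"),
    SLACK_BOT_TOKEN: requireEnv("SLACK_BOT_TOKEN"),
    SLACK_APP_TOKEN: requireEnv("SLACK_APP_TOKEN"),
    PORT: port,
  };
}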
Step 3: Export the Configuration

Use the config loader to create a centralized configuration object that can be imported throughout your project.

import { loadConfig } from "./loader";

export const config = loadConfig();

Conclusion

Transitioning to validated environment variables is a straightforward yet powerful step toward building more reliable and secure applications. By validating variables during startup, you can catch misconfigurations early, save hours of debugging, and ensure your application is always running with the correct settings.

          09/07/2025

          523


Build Smarter: Best Practices for Creating an Optimized Dockerfile

If you’ve been using Docker in your projects, you probably know how powerful it is for shipping consistent environments across teams and systems. It's time to learn how to optimize your Dockerfile. But here’s the thing: a poorly written Dockerfile can quickly become a hidden performance bottleneck, making your images unnecessarily large, your build times painfully slow, or even causing unexpected behavior in production. I’ve seen this firsthand — from early projects where we just “made it work” with whatever Dockerfile we had, to larger systems where the cost of a bad image multiplied across services. My name is Bao. After working on several real-world projects and going through lots of trial and error, I’ve gathered a handful of practical best practices for optimizing Dockerfiles that I’d love to share with you, whether you’re refining a production-grade image or just curious about what you might be missing. Let me walk you through how I approach Docker optimization. Hopefully it’ll save you time, headaches, and a few docker build rage moments 😅.

Identifying Inefficiencies in a Dockerfile: A Case Study

Below is the Dockerfile we’ll analyze:
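(Reconstructed from the “Before” snippets quoted in the optimization steps below; the WORKDIR and EXPOSE lines are assumptions carried over from the final optimized version, so treat this as an approximation of the original.)

FROM ubuntu:latest

RUN apt-get update && apt-get install -y \
    curl \
    wget \
    git \
    vim \
    python3 \
    make \
    g++ && \
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get install -y nodejs

WORKDIR /app

# Copying the whole app before installing dependencies defeats layer caching.
COPY . .
RUN npm install

# Global dev tooling baked into the image.
RUN npm install -g nodemon eslint pm2 typescript prettier

# Custom shell prompt, irrelevant in production.
ENV PS1A="💻\[\e[33m\]\u\[\e[m\]@ubuntu-node\[\e[36m\][\[\e[m\]\[\e[36m\]\w\[\e[m\]\[\e[36m\]]\[\e[m\]: "
RUN echo 'PS1=$PS1A' >> ~/.bashrc

EXPOSE 3000
# A development tool used as the production entrypoint.
CMD ["nodemon", "/app/bin/www"]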
Key Observations:

1. Base image: The Dockerfile uses ubuntu:latest, which is a general-purpose image. While versatile, it is significantly larger than minimal images like ubuntu:slim or Node.js-specific images like node:20-slim or node:20-alpine.
2. Redundant package installation: Tools like vim, wget, and git are installed but may not be necessary for building or running the application.
3. Global npm packages: Packages like nodemon, ESLint, and Prettier are installed globally. These are typically used for development and are not required in a production image.
4. Caching issues: COPY . . is placed before npm install, invalidating the cache whenever any application file changes, even if the dependencies remain the same.
5. Shell customization: Setting up a custom shell prompt (PS1) is irrelevant for production environments and adds unnecessary steps.
6. Development tool in production: The CMD uses nodemon, a development tool, to run the application.

Optimizing Your Docker Image

Here’s how we can optimize the Dockerfile step by step, showing the before and after for each section with the result, to clearly distinguish the improvements.

1. Change the Base Image

Before:

FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl && \
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get install -y nodejs

Uses ubuntu:latest, a general-purpose image that is large and includes many unnecessary tools.

After:

FROM node:20-alpine

Switches to node:20-alpine, a lightweight image specifically tailored for Node.js applications.

Result: With this first change applied, the image size is drastically reduced, by about ~200MB.

2. Simplify Installed Packages

Before:

RUN apt-get update && apt-get install -y \
    curl \
    wget \
    git \
    vim \
    python3 \
    make \
    g++ && \
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get install -y nodejs

Installs multiple tools (curl, wget, vim, git) and Node.js manually, increasing the image size and complexity.

After:

RUN apk add --no-cache python3 make g++

Uses apk (Alpine’s package manager) to install only essential build tools (python3, make, g++).

Result: The image is cleaner and smaller after removing the unnecessary tools and packages (~250MB vs ~400MB with the older version).

3. Leverage Dependency Caching

Before:

COPY . .
RUN npm install

Copies all files before installing dependencies, causing cache invalidation whenever any file changes, even if dependencies remain unchanged.

After:

COPY package*.json ./
RUN npm install --only=production
COPY . .

Copies only package.json and package-lock.json first, ensuring that dependency installation is only re-run when these files change. Installs only production dependencies (--only=production) to exclude devDependencies.

Result: Faster rebuilds and a smaller image by excluding unnecessary files and dependencies.

4. Remove Global npm Installations

Before:

RUN npm install -g nodemon eslint pm2 typescript prettier

Installs global npm packages (nodemon, eslint, pm2, etc.) that are not needed in production, increasing image size.

After: removed entirely, because global tools are unnecessary in production.

Result: Reduced image size and eliminated unnecessary layers.

5. Use a Production-Ready CMD

Before:

CMD ["nodemon", "/app/bin/www"]

Uses nodemon, which is meant for development, not production.

After (as in the final Dockerfile below):

CMD ["node", "/app/bin/www"]

Result: A streamlined and efficient startup command.

6. Remove Unnecessary Shell Customization

Before:

ENV PS1A="💻\[\e[33m\]\u\[\e[m\]@ubuntu-node\[\e[36m\][\[\e[m\]\[\e[36m\]\w\[\e[m\]\[\e[36m\]]\[\e[m\]: "
RUN echo 'PS1=$PS1A' >> ~/.bashrc

Sets and applies a custom shell prompt that has no practical use in production.

After: removed entirely, because shell customization is unnecessary.

Result: Cleaner image with no redundant configurations or layers.

Final Optimized Dockerfile

FROM node:20-alpine
WORKDIR /app
RUN apk add --no-cache python3 make g++
COPY package*.json ./
RUN npm install --only=production
COPY . .
EXPOSE 3000
CMD ["node", "/app/bin/www"]

7. Leverage Multi-Stage Builds to Separate Build and Runtime

In many Node.js projects, you might need tools like TypeScript or linters during the build phase — but they’re unnecessary in the final production image. That’s where multi-stage builds come in handy.

Before: everything — from installation to build to running — happens in a single image, meaning all build-time tools get carried into production.

After: you separate the “build” and “run” stages, keeping only what’s strictly needed at runtime.

Result:
- Smaller, cleaner production image
- Build-time dependencies are excluded
- Faster and safer deployments

Final Multi-Stage Dockerfile

# Stage 1 - Builder
FROM node:20-alpine AS builder
WORKDIR /app
RUN apk add --no-cache python3 make g++
COPY package*.json ./
RUN npm install --only=production
COPY . .

# Stage 2 - Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app ./
EXPOSE 3000
CMD ["node", "/app/bin/www"]

Bonus: Don’t Forget .dockerignore

Just like .gitignore, the .dockerignore file excludes unnecessary files and folders from the Docker build context (like node_modules, .git, logs, environment files, etc.).

Recommended .dockerignore:

node_modules
.git
*.log
.env
Dockerfile.dev
tests/

Why it matters:
- Faster builds (Docker doesn’t copy irrelevant files)
- Smaller and cleaner images
- Lower risk of leaking sensitive or unnecessary files

Results of Optimization

1. Smaller image size: The switch to node:20-alpine and removal of unnecessary packages reduced the image size from 1.36GB down to 862MB.
2. Faster build times: Leveraging caching for dependency installation speeds up rebuilds significantly.
- Build with no cache: Ubuntu (old Dockerfile): ~126.2s; Node 20 Alpine (new Dockerfile): ~78.4s.
- Rebuild with cache (after file changes): Ubuntu: 37.1s (re-runs npm install); Node 20 Alpine: 8.7s (everything cached).

3. Production-ready setup: The image now includes only essential build tools and runtime dependencies, making it secure and efficient for production.

By following these changes, your Dockerfile is now lighter, faster, and better suited for production environments.

Conclusion

Optimizing your Dockerfile is a crucial step in building smarter, faster, and more efficient containers. By adopting best practices — such as choosing the right base image, simplifying installed packages, leveraging caching, and using production-ready configurations — you can significantly enhance your build process and runtime performance. In this article, we explored how small, deliberate changes, like switching to node:20-alpine, removing unnecessary tools, and refining dependency management, can lead to real gains in image size, build speed, and production readiness.

            08/07/2025

            624


              How to Create Smooth Navigation Transitions with View Transitions API and React Router?

Normally, when users move between pages in a web app, they see a white flash or maybe a skeleton loader. That’s okay, but it doesn’t feel smooth. Try the View Transitions API! Imagine you have a homepage showing a list of movie cards. When you click one, it takes you to a detail page with a big banner of the same movie. Right now, there’s no animation between these two screens, so the connection between them feels broken. With the View Transitions API, we can make that connection smoother. It creates animations between pages, helping users feel like they’re staying in the same app instead of jumping from one screen to another.

Smooth and connected transition using View Transitions API

In this blog, you’ll learn how to create these nice transitions using the View Transitions API and React Router v7.

Basic Setup

The easiest way to use view transitions is by adding the viewTransition prop to your React Router links:

import { NavLink } from 'react-router';

<NavLink to='/movies/avengers-age-of-ultron' viewTransition>
  Avengers: Age of Ultron
</NavLink>

Only cross-fade animation without element linking

It works — but it still feels a bit plain. The whole page fades, but nothing stands out or feels connected.

Animating Specific Elements

In the previous example, the entire page takes part in the transition. But sometimes, you want just one specific element — like an image — to animate smoothly from one page to another. Let’s say you want the movie image on the homepage to smoothly turn into the banner on the detail page. We can do that by giving both images the same view-transition-name.

// app/routes/home.tsx
export default function Home() {
  return (
    <NavLink to='/movies/avengers-age-of-ultron' viewTransition>
      <img
        className='card-image'
        src='/assets/avengers-age-of-ultron.webp'
        alt='Avengers: Age of Ultron'
      />
      <span>Avengers: Age of Ultron</span>
    </NavLink>
  );
}

// app/routes/movie.tsx
export default function Movie() {
  return (
    <img
      className='movie-image'
      src='/assets/avengers-age-of-ultron.webp'
      alt='Avengers: Age of Ultron'
    />
  );
}

/* app.css */
/* Assigned to the movie card image on the home page */
.card-image {
  view-transition-name: movie-image;
}
/* Assigned to the movie image on the movie details page */
.movie-image {
  view-transition-name: movie-image;
}

Now, when you click a movie card, the image will smoothly grow into the banner image on the next page. It feels much more connected and polished.

Animating a single element with view-transition-name

Handling Dynamic Data

This works great for a single element, but what happens if you have a list of items, like multiple movies? If you assign the same view-transition-name to all items, the browser won’t know which one to animate. Each transition name must be unique per element — but hardcoding different class names for every item is not scalable, especially when the data is dynamic.

Incorrect setup – Same view-transition-name used for all items in a list

The Solution: Assign view-transition-name During Navigation

Instead of setting the view-transition-name upfront, a more flexible approach is to add it dynamically when navigation starts — that is, when the user clicks a link.

// app/routes/home.tsx
export default function Home({ loaderData: movies }: Route.ComponentProps) {
  return (
    <ul>
      {movies.map((movie) => (
        <li key={movie.id}>
          <NavLink to={`/movies/${movie.id}`} viewTransition>
            <img className='card-image' src={movie.image} alt={movie.title} />
            <span>{movie.title}</span>
          </NavLink>
        </li>
      ))}
    </ul>
  );
}

// app/routes/movie.tsx
export default function Movie({ loaderData: movie }: Route.ComponentProps) {
  return <img className='movie-image' src={movie.image} alt={movie.title} />;
}

/* app.css */
/* Assign transition names to elements during navigation */
a.transitioning .card-image {
  view-transition-name: movie-image;
}
.movie-image {
  view-transition-name: movie-image;
}

Final output – Smooth transition with dynamic list items

Here’s what happens:
- When a user clicks a link, React Router adds a transitioning class to it.
- That class tells the browser which image should animate.
- On the detail page, the image already has view-transition-name: movie-image, so it matches.

This way, you can reuse the same CSS for all items without worrying about assigning unique class names ahead of time. You can explore the full source code below:
- Live Demo
- Source on GitHub

Browser Support

The View Transitions API is still relatively new, and browser support is limited:
- Chrome (from version 111)
- Edge (Chromium-based)
- Firefox & Safari: not supported yet (as of May 2025)

You should always check for support before using it in production, for example with the feature-detection sketch below.
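A minimal sketch of such a check, for cases where you trigger a transition yourself outside React Router's viewTransition prop. The helper name is illustrative; recent TypeScript DOM typings include document.startViewTransition, but on older ones you may need a cast.

// Progressive enhancement: animate when the API exists, update instantly otherwise.
function navigateWithTransition(updateDom: () => void) {
  if (!document.startViewTransition) {
    // Fallback for Firefox/Safari and older browsers: no animation.
    updateDom();
    return;
  }
  document.startViewTransition(updateDom);
}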
Conclusion

The View Transitions API gives us a powerful tool to deliver smooth, native-feeling page transitions in our web apps. By combining it with React Router, you can:
- Enable basic transitions with minimal setup
- Animate specific elements using view-transition-name
- Handle dynamic content gracefully by assigning transition names at runtime

Hope this guide helps you create more fluid and polished navigation experiences in your React projects!

              08/07/2025

              954


                Uploading objects to AWS S3 with presigned URLs

I’m Quang Tran, a full-stack developer with four years of experience, and I’ve had my fair share of struggles when it comes to uploading files to cloud storage services like Amazon S3. Not too long ago, I used to rely on the traditional method: the server would receive the file from the client, store it temporarily, and then push it to S3. What seemed like a simple task quickly became a resource-draining nightmare, and my server started to “cry out” from the overload. But then I discovered presigned URLs — the technique that allows clients to upload files directly to S3 without burdening the server. Presigned URLs help us solve the issues mentioned above, and in this SupremeTech article I will show you how to implement them.

Traditional File Uploading

When you use applications with file upload features, such as uploading photos to social media platforms, the process mainly consists of selecting a photo from your device and sending it to the server for storage. This process started with traditional upload and has evolved over time. The steps were as follows:
1. The user selects a photo from the device.
2. The client sends a request to upload the photo to the server.
3. The server receives and processes the photo, then stores it in the storage.

The traditional file upload process

This process may seem simple, but it can impact the server’s performance. Imagine thousands of people uploading data at the same time, with large file sizes; your server could become overloaded. This forces you to scale your application server and ensure available network bandwidth. After identifying this issue, AWS introduced the presigned URL feature as a solution. So, what is a presigned URL?

What Is a Presigned URL?

A presigned URL is a URL that you can provide to your users to grant temporary access to a specific S3 object. You can use a presigned URL to read or upload an object directly to S3 without passing it through the server. This allows an upload without requiring another party to have AWS security credentials or permissions. If an object with the same key already exists in the bucket specified in the presigned URL, Amazon S3 replaces the existing object with the uploaded object. When creating a presigned URL, you must provide the following information:
- The Amazon S3 bucket name
- An object key (when reading, this identifies the object in your Amazon S3 bucket; when uploading, this is the file name to be uploaded)
- An HTTP method (GET for reading objects, PUT for uploading)
- An expiration time interval
- AWS credentials (AWS access key ID, AWS secret key ID)

You can use the presigned URL multiple times, up to the expiration date and time. Amazon S3 grants access to the object through a presigned URL, which can only be generated by the bucket’s owner or anyone with valid security credentials.

How Do You Upload a File to S3 Using a Presigned URL?

Workflow for uploading a file using a presigned URL

We already know what a presigned URL is, so let’s explore how to create one and upload a photo through it. There are two ways to create a presigned URL for uploading:
- Using the AWS Toolkit for Visual Studio (Windows).
- Using the AWS SDKs to generate a PUT presigned URL for uploading a file.

In this blog, I will introduce how to use the AWS JS SDK (AWS SDK for JavaScript) to generate a PUT presigned URL for uploading a file.

Using the AWS JS SDK

First, you need to log in to the AWS console with an account that has permission to read and write objects to S3.
Two notes before you begin:
- When you use the AWS SDKs to generate a presigned URL, the maximum expiration time is 7 days from the creation date.
- You need to prepare the AWS credentials (AWS access key ID, AWS secret key ID), region, S3 bucket name, and object key before uploading, and store them securely on the server.

Before we start creating a presigned URL, there are a few important things to note:
- Block all public access to the S3 bucket (crucial for data security, preventing accidental data leaks or unauthorized access to sensitive information).
- Never store AWS credentials (access key ID, secret key ID) in front-end code.
- Use environment variables and secret managers to store AWS credentials securely.
- Limit IAM permissions (the least-privilege principle, an AWS recommendation).
- Configure CORS to allow other origins to send file upload requests.

To create a direct image upload flow to S3, follow these steps. On the front end, you call the API to create a presigned URL on the back-end server and send the key of the object you want to store. On the back end, you create an API to generate the presigned URL, as shown below, and respond to the front end.

import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const createPresignedUrlWithClient = async ({ region, bucket, key }) => {
  const client = new S3Client({
    region,
    credentials: {
      accessKeyId: 'your access key id',
      secretAccessKey: 'your secret key id',
    },
  });
  const command = new PutObjectCommand({ Bucket: bucket, Key: key });
  return await getSignedUrl(client, command, { expiresIn: 36000 });
};

const presignedUrl = await createPresignedUrlWithClient({
  region: 'ap-southeast-1',
  bucket: 'your-bucket-name',
  key: 'example.txt',
});
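For completeness, here is a minimal sketch of what that back-end API could look like, assuming an Express server and reusing createPresignedUrlWithClient from above. The /presigned-url route name is illustrative, and real code should load credentials from environment variables or an IAM role rather than string literals.

import express from 'express';

const app = express();

// The client sends the desired object key; the server returns a short-lived upload URL.
app.get('/presigned-url', async (req, res) => {
  try {
    const url = await createPresignedUrlWithClient({
      region: process.env.AWS_REGION,
      bucket: process.env.S3_BUCKET,
      key: String(req.query.key), // validate and sanitize the key in real code
    });
    res.json({ url });
  } catch (err) {
    res.status(500).json({ error: 'Could not create presigned URL' });
  }
});

app.listen(3000);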
The front end receives the response and performs a PUT request to upload the file directly to the S3 bucket:

const putToPresignedUrl = (presignedUrl) => {
  const data = 'Hello World!';
  axios.put(presignedUrl, data);
};

Object in S3 after upload

Content of the object

An example of a presigned URL:

https://presignedurldemo.s3.ap-southeast-1.amazonaws.com/example.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIAUPMYNICO4HMDKONH%2F20250101%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20250101T021742Z&X-Amz-Expires=36000&X-Amz-Signature=9f29f0f34a19c9e9748eb2fc197138d4345e0124746f99ad56e27e08886fa01a&X-Amz-SignedHeaders=host&x-amz-checksum-crc32=AAAAAA%3D%3D&x-amz-sdk-checksum-algorithm=CRC32&x-id=PutObject

Among these are query parameters that S3 requires to determine whether the upload operation is allowed:
- X-Amz-Algorithm: The signing algorithm used, typically AWS4-HMAC-SHA256.
- X-Amz-Credential: A string that includes the access key ID and the scope of the request, in the format <AccessKey>/<Date>/<Region>/s3/aws4_request. It helps AWS identify the credentials used to sign the request.
- X-Amz-Date: The timestamp (in UTC) when the URL was generated, in the format YYYYMMDD'T'HHMMSS'Z'.
- X-Amz-Expires: The number of seconds before the URL expires (e.g., 3600 for one hour). After this time, the URL becomes invalid.
- X-Amz-SignedHeaders: A list of headers included in the signature. Commonly just host, but can include content-type, etc., if specified during signing.
- X-Amz-Signature: The actual cryptographic signature, which ensures the request has not been tampered with and proves that the sender has valid credentials.

Now that you know how to generate a presigned URL, let’s examine some limitations you should consider.

Limitations of Using S3 Presigned URLs
- 5GB upload limit: S3 enforces a 5GB per-request upload limit, with no easy way to increase it.
- URL management overhead: A unique URL must be generated for every upload, increasing code complexity and backend logic.
- Risk of unintended access: Anyone with the URL can upload until it expires. There’s no built-in user validation.
- Client-side upload issues: Client-side uploads can cause data inconsistency if an error occurs mid-upload.

See more:
- Mastering AWS Lambda: An Introduction to Serverless Computing
- AWS Lambda Triggers: How to Trigger a Lambda Function?
- Best Practices for Building Reliable AWS Lambda Functions

Conclusion

You have learned another way to upload objects to S3 directly without requiring public access to your S3 bucket. Please choose the method that best fits your use case.

References:
AWS (no date) Uploading objects – Amazon Simple Storage Service. Available at: https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html (Accessed: 19 May 2025).
AWS (no date) Uploading objects with presigned URLs – Amazon Simple Storage Service. Available at: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html (Accessed: 19 May 2025).

                19/05/2025

                1.39k

                Quang Tran M.


                  Best Practices for Building Reliable AWS Lambda Functions

Welcome back to the "Mastering AWS Lambda with Bao" series! The previous episode explored how AWS Lambda connects to the world through AWS Lambda triggers and events. Using S3 and DynamoDB Streams triggers, we demonstrated how Lambda automates workflows by processing events from multiple sources. This example provided a foundation for understanding Lambda’s event-driven architecture. However, building reliable Lambda functions requires more than understanding how triggers work. To create AWS Lambda functions that can handle real-world production workloads, you need to focus on optimizing performance, implementing robust error handling, and enforcing strong security practices. These steps keep your Lambda functions scalable, efficient, and secure. In this episode, SupremeTech will explore the best practices for building reliable AWS Lambda functions, covering two essential areas:
- Optimizing performance: reducing latency, managing resources, and improving runtime efficiency.
- Error handling and logging: capturing meaningful errors, logging effectively with CloudWatch, and setting up retries.

By adopting these best practices, you’ll be well-equipped to build Lambda functions that thrive in production environments. Let’s dive in!

Optimizing Performance

Optimize the Lambda function’s performance so it runs efficiently with minimal latency and cost. Let’s focus first on cold starts, a critical area of concern for most developers.

Understanding Cold Starts

What are cold starts? A cold start occurs when AWS Lambda initializes a new execution environment to handle an incoming request. This happens under the following circumstances:
- When the Lambda function is invoked for the first time.
- After a period of inactivity (execution environments are garbage collected after a few minutes of no activity, meaning they are shut down automatically).
- When scaling up to handle additional concurrent requests.

Cold starts introduce latency because AWS needs to set up a new execution environment from scratch.

Steps involved in a cold start:
1. Resource allocation: AWS provisions a secure, isolated container for the Lambda function, with resources like memory and CPU allocated based on the function’s configuration.
2. Execution environment initialization: AWS sets up the sandbox environment, including the /tmp directory for temporary storage and networking configuration, such as Elastic Network Interfaces (ENI), for VPC-based Lambdas.
3. Runtime initialization: The specified runtime (e.g., Node.js, Python, Java) is initialized. For Node.js, this involves loading the JavaScript engine (V8) and runtime APIs.
4. Dependency initialization: AWS loads the deployment package (your Lambda code and dependencies), and any initialization code in your function (e.g., database connections, library imports) is executed.
5. Handler invocation: Once the environment is fully set up, AWS invokes your Lambda function’s handler with the input event.

Cold start latency varies depending on the runtime, deployment package size, and whether the function runs inside a VPC:
- Node.js and Python: ~200ms–500ms for non-VPC functions.
- Java or .NET: ~500ms–2s due to heavier runtime initialization.
- VPC-based functions: add ~500ms–1s due to ENI initialization.

Warm Starts

In contrast to cold starts, warm starts reuse an already-initialized execution environment. AWS keeps environments "warm" for a short time after a function is invoked, allowing subsequent requests to bypass initialization steps.
Key differences:
- Cold start: new container setup → high latency.
- Warm start: reused container → minimal latency (~<100ms).

Reducing Cold Starts

Cold starts can significantly impact the performance of latency-sensitive applications. Below are some actionable strategies to reduce cold starts, each with good and bad practice examples for clarity.

1. Use Smaller Deployment Packages

Good practice: Minimize the size of your deployment package by including only the required dependencies and removing unnecessary files. Use bundlers like Webpack, ESBuild, or Parcel to optimize your package size. Example:

const DynamoDB = require('aws-sdk/clients/dynamodb'); // Only loads DynamoDB, not the entire SDK

Bad practice: Bundling the entire AWS SDK or other large libraries without considering modular imports. Example:

const AWS = require('aws-sdk'); // Loads the entire SDK, increasing package size

Why it matters: Smaller deployment packages load faster during the initialization phase, reducing cold start latency.

2. Move Heavy Initialization Outside the Handler

Good practice: Place resource-heavy operations, such as database or SDK client initialization, outside the handler function so they are executed only once per container lifecycle, on a cold start. Example (the TableName parameter, required by DynamoDB.get, is added here with an illustrative name):

const DynamoDB = new AWS.DynamoDB.DocumentClient(); // Initialized once per container

exports.handler = async (event) => {
  const data = await DynamoDB.get({ TableName: 'myTable', Key: { id: '123' } }).promise();
  return data;
};

Bad practice: Reinitializing resources inside the handler for every invocation. Example:

exports.handler = async (event) => {
  const DynamoDB = new AWS.DynamoDB.DocumentClient(); // Initialized on every call
  const data = await DynamoDB.get({ TableName: 'myTable', Key: { id: '123' } }).promise();
  return data;
};

Why it matters: Reinitializing resources for every invocation increases latency and consumes unnecessary computing power.

3. Enable Provisioned Concurrency

Good practice: Use provisioned concurrency to pre-initialize a set number of environments, ensuring they are always ready to handle requests. Example (AWS CLI; note that the command requires a qualifier, i.e. a published version or alias — "prod" below is illustrative):

aws lambda put-provisioned-concurrency-config \
  --function-name myFunction \
  --qualifier prod \
  --provisioned-concurrent-executions 5

The same setting is also available in the AWS Management Console, under the function's configuration.

Why it matters: Provisioned concurrency ensures a constant pool of pre-initialized environments, eliminating cold starts entirely for latency-sensitive applications.

4. Reduce Dependencies

Good practice: Evaluate your libraries and replace heavy frameworks with lightweight alternatives or native APIs. Example:

console.log(new Date().toISOString()); // Native JavaScript API

Bad practice: Using heavy libraries for simple tasks without considering alternatives. Example:

const moment = require('moment');
console.log(moment().format());

Why it matters: Large dependencies increase the deployment package size, leading to slower initialization during cold starts.

5. Avoid Unnecessary VPC Configurations

Good practice: Place Lambda functions outside a VPC unless necessary. If a VPC is required (e.g., to access private resources like RDS), optimize networking using VPC endpoints. Example: use DynamoDB and S3 directly without placing the Lambda inside a VPC.

Bad practice: Deploying Lambda functions inside a VPC unnecessarily, such as for accessing services like DynamoDB or S3, which do not require VPC access. Why it’s bad: placing Lambda in a VPC introduces additional latency due to ENI setup during cold starts.

Why it matters: Functions outside a VPC initialize faster because they skip ENI setup.
6. Choose Lightweight Runtimes

Good Practice: Use lightweight runtimes like Node.js or Python for faster initialization than heavier runtimes like Java or .NET.

Why It's Good: Lightweight runtimes require fewer initialization resources, leading to lower cold start latency.

Why It Matters: Heavier runtimes have higher cold start latency due to the complexity of their initialization process.

Summary of Best Practices for Cold Starts

- Deployment Package. Good: Use small packages with only the required dependencies. Bad: Bundle unused libraries, increasing the package size.
- Initialization. Good: Perform heavy initialization (e.g., database connections) outside the handler. Bad: Initialize resources inside the handler for every request.
- Provisioned Concurrency. Good: Enable provisioned concurrency for latency-sensitive applications. Bad: Ignore provisioned concurrency for high-traffic functions.
- Dependencies. Good: Use lightweight libraries or native APIs for simple tasks. Bad: Use heavy libraries like moment.js without evaluating lightweight alternatives.
- VPC Configuration. Good: Avoid unnecessary VPC configurations; use VPC endpoints when required. Bad: Place all Lambda functions inside a VPC, even when accessing public AWS services.
- Runtime Selection. Good: Choose lightweight runtimes like Node.js or Python for faster initialization. Bad: Use heavy runtimes like Java or .NET for simple, lightweight workloads.

Error Handling and Logging

Error handling and logging are critical for ensuring your Lambda functions are reliable and easy to debug. Effective error handling prevents cascading failures in your architecture, while good logging practices help you monitor and troubleshoot issues efficiently.

Structured Error Responses

Errors in Lambda functions can occur for various reasons: invalid input, AWS service failures, or unhandled exceptions in the code. Properly structured error handling ensures that these issues are captured, logged, and surfaced effectively to users or downstream services.

1. Define Consistent Error Structures

Good Practice: Use a standard error format so all errors are predictable and machine-readable.

Example:

{
  "errorType": "ValidationError",
  "message": "Invalid input: 'email' is missing",
  "requestId": "12345-abcd"
}

Bad Practice: Returning vague or unstructured errors that make debugging difficult.

{ "message": "Something went wrong", "error": true }

Why It Matters: Structured errors make debugging easier by providing consistent, machine-readable information. They also improve communication with clients or downstream systems by conveying what went wrong and how it should be handled.

2. Use Custom Error Classes

Good Practice: In Node.js, define custom error classes for clarity:

class ValidationError extends Error {
  constructor(message) {
    super(message);
    this.name = "ValidationError";
    this.statusCode = 400; // Custom property
  }
}

// Throwing a custom error
if (!event.body.email) {
  throw new ValidationError("Invalid input: 'email' is missing");
}

Bad Practice: Using generic errors for everything, making it hard to identify or categorize issues.

Example:

throw new Error("Error occurred");

Why It Matters: Custom error classes make error handling more precise and help separate application errors (e.g., validation issues) from system errors (e.g., database failures).
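Tying these two practices together, a handler can catch a custom error and translate it into the consistent error structure shown above. A minimal sketch, assuming the ValidationError class from the previous example is in scope:

exports.handler = async (event, context) => {
    try {
        const body = JSON.parse(event.body || "{}");
        if (!body.email) {
            throw new ValidationError("Invalid input: 'email' is missing");
        }
        return { statusCode: 200, body: JSON.stringify({ ok: true }) };
    } catch (error) {
        // Map known errors to the structured format; default to a 500 otherwise.
        return {
            statusCode: error.statusCode || 500,
            body: JSON.stringify({
                errorType: error.name || "InternalError",
                message: error.message,
                requestId: context.awsRequestId,
            }),
        };
    }
};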
3. Include Contextual Information in Logs

Good Practice: Add relevant information like requestId, timestamp, and input data (excluding sensitive information) when logging errors.

Example:

console.error({
    errorType: "ValidationError",
    message: "The 'email' field is missing.",
    requestId: context.awsRequestId,
    input: event.body,
    timestamp: new Date().toISOString(),
});

Bad Practice: Logging errors without any context, making debugging difficult.

Example:

console.error("Error occurred");

Why It Matters: Contextual information in logs makes it easier to identify what triggered the error and where it happened, improving the debugging experience.

Retry Logic Across AWS SDK and Other Services

Retrying failed operations is critical when interacting with external services, because temporary failures (e.g., throttling, timeouts, or transient network issues) can disrupt workflows. Whether you're using the AWS SDK, third-party APIs, or internal services, applying retry logic effectively ensures system reliability while avoiding unnecessary overhead.

1. Use Exponential Backoff and Jitter

Good Practice: Apply exponential backoff with jitter to stagger retry attempts. This avoids overwhelming the target service, especially under high load or rate-limiting scenarios.

Example (general implementation):

async function retryWithBackoff(fn, retries = 3, delay = 100) {
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await fn();
        } catch (error) {
            if (attempt === retries) throw error; // Rethrow after final attempt
            const backoff = delay * 2 ** (attempt - 1) + Math.random() * delay; // Add jitter
            console.log(`Retrying in ${backoff.toFixed()}ms...`);
            await new Promise((res) => setTimeout(res, backoff));
        }
    }
}

// Usage (inside an async context)
const result = await retryWithBackoff(() => callThirdPartyAPI());

Bad Practice: Retrying without delays or jitter can lead to cascading failures and amplify the problem.

for (let i = 0; i < retries; i++) {
    try {
        return await callThirdPartyAPI();
    } catch (error) {
        console.log("Retrying immediately...");
    }
}

Why It Matters: Exponential backoff reduces pressure on the failing service, while jitter randomizes retry times, preventing synchronized retry storms from multiple clients.

2. Leverage Built-In Retry Mechanisms

Good Practice: Use the built-in retry logic of libraries, SDKs, or APIs whenever available. These are typically optimized for the specific service.

Example (AWS SDK):

const DynamoDB = new AWS.DynamoDB.DocumentClient({
    maxRetries: 3, // Number of retries
    retryDelayOptions: { base: 200 }, // Base delay in ms
});

Example (Axios for third-party APIs): Use libraries like axios-retry to integrate retry logic for HTTP requests.

const axios = require('axios');
const axiosRetry = require('axios-retry');

axiosRetry(axios, {
    retries: 3, // Retry 3 times
    retryDelay: (retryCount) => retryCount * 200, // Increasing delay per attempt
    retryCondition: (error) => error.response && error.response.status >= 500, // Retry only for server errors
});

const response = await axios.get("https://example.com/api");

Bad Practice: Writing your own retry logic unnecessarily when built-in mechanisms exist, risking a suboptimal implementation.

Why It Matters: Built-in retry mechanisms are often optimized for the specific service or library, reducing the likelihood of bugs and configuration errors.
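The AWS SDK example above uses SDK for JavaScript v2, which this series uses throughout. If you are on SDK v3, the equivalent knob is maxAttempts on the client constructor; a minimal sketch:

// AWS SDK v3: retries are configured per client via maxAttempts.
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({
    region: "ap-southeast-1",
    maxAttempts: 3, // Total attempts, including the initial request
});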
3. Configure Service-Specific Retry Limits

Good Practice: Set retry limits based on the service's characteristics and criticality.

Example (AWS S3 upload):

const s3 = new AWS.S3({
    maxRetries: 5, // Allow more retries for critical operations
    retryDelayOptions: { base: 300 }, // Slightly longer base delay
});

Example (database queries):

async function queryDatabaseWithRetry(queryFn) {
    return retryWithBackoff(queryFn, 5, 100); // Retry with custom backoff logic
}

Bad Practice: Allowing unlimited retries can cause resource exhaustion and increase costs.

while (true) {
    try {
        return await callService();
    } catch (error) {
        console.log("Retrying...");
    }
}

Why It Matters: Excessive retries can lead to runaway costs or cascading failures across the system. Always define a sensible retry limit.

4. Handle Transient vs. Persistent Failures

Good Practice: Retry only transient failures (e.g., timeouts, throttling, 5xx errors) and fail fast on persistent failures (e.g., invalid input, 4xx errors).

Example:

const isTransientError = (error) =>
    error.code === "ThrottlingException" || error.code === "TimeoutError";

async function callServiceWithRetry(retries = 3, delay = 100) {
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await callService();
        } catch (error) {
            // Do not retry persistent errors; rethrow immediately.
            if (!isTransientError(error) || attempt === retries) throw error;
            await new Promise((res) => setTimeout(res, delay * 2 ** (attempt - 1)));
        }
    }
}

Bad Practice: Retrying all errors indiscriminately, including persistent failures like ValidationException or 404 Not Found.

Why It Matters: Persistent failures are unlikely to succeed on retry and can waste resources unnecessarily.

5. Log Retry Attempts

Good Practice: Log each retry attempt with relevant context, such as the retry count and delay.

async function retryWithBackoff(fn, retries = 3, delay = 100) {
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await fn();
        } catch (error) {
            if (attempt === retries) throw error;
            console.log(`Attempt ${attempt} failed. Retrying in ${delay}ms...`);
            await new Promise((res) => setTimeout(res, delay));
        }
    }
}

Bad Practice: Failing to log retries makes it difficult to debug or understand retry behavior.

Why It Matters: Logs provide valuable insights into system behavior and help diagnose retry-related issues.

Summary of Best Practices for Retry Logic

- Retry Logic. Good: Use exponential backoff with jitter to stagger retries. Bad: Retry immediately without delays, causing retry storms.
- Built-In Mechanisms. Good: Leverage AWS SDK retry options or third-party libraries like axios-retry. Bad: Write custom retry logic unnecessarily when optimized built-in solutions are available.
- Retry Limits. Good: Define a sensible retry limit (e.g., 3–5 retries). Bad: Allow unlimited retries, risking resource exhaustion or runaway costs.
- Transient vs. Persistent. Good: Retry only transient errors (e.g., timeouts, throttling) and fail fast for persistent errors. Bad: Retry all errors indiscriminately, including persistent failures like validation or 404 errors.
- Logging. Good: Log retry attempts with context (e.g., attempt number, delay, error) to aid debugging. Bad: Fail to log retries, making it hard to trace retry behavior or diagnose problems.

Logging Best Practices

Logs are essential for debugging and monitoring Lambda functions. However, unstructured or excessive logging can make it harder to find helpful information.
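Before walking through the individual practices, here is a minimal structured-logging sketch that the points below build on. The log helper and its field names are illustrative, not a library API:

// A tiny structured logger: one JSON object per line, easy to filter in CloudWatch.
const log = (level, message, context = {}) =>
    console.log(JSON.stringify({
        level,
        message,
        timestamp: new Date().toISOString(),
        ...context,
    }));

// Usage inside a handler (requestId comes from the Lambda context object):
// log("info", "Function started", { requestId: context.awsRequestId });
// log("error", "DynamoDB query failed", { requestId: context.awsRequestId, table: "MyTable" });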
1. Mask or Exclude Sensitive Data

Good Practice: Avoid logging sensitive information such as user credentials; API keys, tokens, or secrets; and Personally Identifiable Information (PII). Use tools like AWS Secrets Manager for sensitive data management.

Example: Mask sensitive fields before logging:

const sanitizedInput = {
    ...event,
    password: "***",
};

console.log(JSON.stringify({
    level: "info",
    message: "User login attempt logged.",
    input: sanitizedInput,
}));

Bad Practice: Logging sensitive data directly can cause security breaches or compliance violations (e.g., GDPR, HIPAA).

Example:

console.log(`User logged in with password: ${event.password}`);

Why It Matters: Logging sensitive data can expose systems to attackers, breach compliance rules, and compromise user trust.

2. Set Log Retention Policies

Good Practice: Set a retention policy for CloudWatch log groups to prevent excessive log storage costs. AWS allows you to configure retention settings (e.g., 7, 14, or 30 days); see the CLI sketch after this section.

Bad Practice: Using the default "Never Expire" retention policy, which stores logs indefinitely.

Why It Matters: Unmanaged logs increase costs and make it harder to find relevant data. Retaining logs only as long as needed reduces costs and keeps logs manageable.

3. Avoid Excessive Logging

Good Practice: Log only what is necessary to monitor, troubleshoot, and analyze system behavior. Use info, debug, and error levels to prioritize logs appropriately.

console.info("Function started processing...");
console.error("Failed to fetch data from DynamoDB: ", error.message);

Bad Practice: Logging every detail (e.g., input payloads, execution steps) unnecessarily increases log volume.

Example:

console.log(`Received event: ${JSON.stringify(event)}`); // Avoid logging full payloads unnecessarily

Why It Matters: Excessive logging clutters log storage, increases costs, and makes it harder to isolate relevant logs.

4. Use Log Levels (Info, Debug, Error)

Good Practice: Use different log levels to differentiate between critical and non-critical information:

- info: For general execution logs (e.g., function start, successful completion).
- debug: For detailed logs during development or troubleshooting.
- error: For failure scenarios requiring immediate attention.

Bad Practice: Using a single log level (e.g., console.log() everywhere) without prioritization.

Why It Matters: Log levels make it easier to filter logs based on severity and focus on critical issues in production.
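As referenced under log retention above, retention can be set per log group with the AWS CLI. A minimal sketch, assuming the default log group name for a function called myFunction:

# Keep logs for 14 days instead of the default "Never Expire".
aws logs put-retention-policy \
    --log-group-name /aws/lambda/myFunction \
    --retention-in-days 14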
Conclusion

In this episode of "Mastering AWS Lambda with Bao", we explored critical best practices for building reliable AWS Lambda functions, focusing on optimizing performance, error handling, and logging.

- Optimizing Performance: By reducing cold starts, using smaller deployment packages and lightweight runtimes, and optimizing VPC configurations, you can significantly lower latency. Strategies like moving initialization outside the handler and leveraging Provisioned Concurrency ensure smoother execution for latency-sensitive applications.
- Error Handling: Implementing structured error responses and custom error classes makes troubleshooting easier and helps differentiate between transient and persistent issues. Handling errors consistently improves system resilience.
- Retry Logic: Applying exponential backoff with jitter, using built-in retry mechanisms, and setting sensible retry limits ensures that Lambda functions handle failures gracefully without overwhelming dependent services.
- Logging: Effective logging with structured formats, contextual information, log levels, and appropriate retention policies enables better visibility, debugging, and cost control. Keeping sensitive data out of logs ensures security and compliance.

By following these best practices, you can improve Lambda function performance, reduce operational costs, and build scalable, reliable, and secure serverless applications with AWS Lambda.

In the next episode, we'll dive deeper into "Handling Failures with Dead Letter Queues (DLQs)", exploring how DLQs act as a safety net for capturing failed events and ensuring no data loss occurs in your workflows. Stay tuned!

Note:

[1] Provisioned Concurrency is not a universal solution. While it eliminates cold starts, it also incurs additional costs, since pre-initialized environments are billed regardless of usage.

- When to use: Latency-sensitive workloads like APIs or real-time applications where even a slight delay is unacceptable.
- When not to use: Functions with unpredictable or low invocation rates (e.g., batch jobs, infrequent triggers). For such scenarios, on-demand concurrency may be more cost-effective.


                      Triggers and Events: How AWS Lambda Connects with the World

Welcome back to the "Mastering AWS Lambda with Bao" series! In the previous episode, SupremeTech explored how to create an AWS Lambda function triggered by AWS EventBridge to fetch data from DynamoDB, process it, and send it to an SQS queue. That example gave you the foundational skills for building serverless workflows with Lambda. In this episode, we'll dive deeper into AWS Lambda triggers and events, the backbone of AWS Lambda's event-driven architecture. Triggers enable Lambda to respond to specific actions or events from various AWS services, allowing you to build fully automated, scalable workflows.

This episode will help you:

- Understand how triggers and events work.
- Explore a comprehensive list of popular AWS Lambda triggers.
- Implement a two-trigger example to see Lambda in action.

Our example is simplified for learning purposes and not optimized for production. Let's get started!

Prerequisites

Before we begin, ensure you have the following prerequisites in place:

- AWS Account: Ensure you have access to create and manage AWS resources.
- Basic Knowledge of Node.js: Familiarity with JavaScript and Node.js will help you understand the Lambda function code.

Once you have these prerequisites ready, proceed with the workflow setup.

Understanding AWS Lambda Triggers and Events

What Are Triggers in AWS Lambda?

AWS Lambda triggers are configurations that enable a Lambda function to execute in response to specific events. These events are generated by AWS services (e.g., S3, DynamoDB, API Gateway) or by external applications integrated through services like Amazon EventBridge. For example:

- Uploading a file to an S3 bucket can trigger a Lambda function to process the file.
- Changes in a DynamoDB table can trigger Lambda to perform additional computations or send notifications.

How Do Events Work in AWS Lambda?

When a trigger is activated, it generates an event: a structured JSON document containing details about what occurred. Lambda receives this event as input when executing the function.

Example event from an S3 trigger:

{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "demo-upload-bucket" },
        "object": { "key": "example-file.txt" }
      }
    }
  ]
}

Popular Triggers in AWS Lambda

Here's a list of some of the most commonly used triggers:

- Amazon S3: Process file uploads. Example: Resize images, extract metadata, or move files between buckets.
- Amazon DynamoDB Streams: React to data changes in a DynamoDB table. Example: Propagate updates or analyze new entries.
- Amazon API Gateway: Build REST or WebSocket APIs. Example: Process user input or return dynamic data.
- Amazon EventBridge: React to application or AWS service events. Example: Trigger Lambda for scheduled jobs or custom events.
- Amazon SQS: Process messages asynchronously. Example: Decouple microservices with a message queue.
- Amazon Kinesis: Process real-time streaming data. Example: Analyze logs or clickstream data.
- AWS IoT Core: Process messages from IoT devices. Example: Analyze sensor readings or control devices.

By leveraging triggers and events, AWS Lambda enables you to automate complex workflows seamlessly.
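To make the event shape above concrete, here is a minimal sketch of a handler that reads the bucket and key out of that S3 event (the log line is illustrative; the full two-trigger function comes later in this article):

exports.handler = async (event) => {
    // A single S3 notification can batch multiple records.
    for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = record.s3.object.key;
        console.log(`Received ${key} from bucket ${bucket}`);
    }
    return { statusCode: 200 };
};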
Setting Up IAM Roles (Optional)

Before setting up Lambda triggers, we need to configure an IAM role with the necessary permissions.

Step 1: Create an IAM Role

1. Go to the IAM Console and click Create role.
2. Select AWS Service → Lambda and click Next.
3. Attach the following managed policies:
   - AmazonS3ReadOnlyAccess: For reading files from S3.
   - AmazonDynamoDBFullAccess: For writing metadata to DynamoDB and accessing DynamoDB Streams.
   - AmazonSNSFullAccess: For publishing notifications to SNS.
   - CloudWatchLogsFullAccess: For logging Lambda function activity.
4. Click Next and enter a name (e.g., LambdaTriggerRole).
5. Click Create role.

Setting Up the Workflow

For this episode, we'll create a simplified two-trigger workflow:

- S3 Trigger: Processes uploaded files and stores metadata in DynamoDB.
- DynamoDB Streams Trigger: Sends a notification via SNS when new metadata is added.

Step 1: Create an S3 Bucket

1. Open the S3 Console in AWS.
2. Click Create bucket and configure:
   - Bucket name: Enter a unique name (e.g., upload-csv-lambda-st).
   - Region: Choose your preferred region (I will go with ap-southeast-1).
3. Click Create bucket.

Step 2: Create a DynamoDB Table

1. Navigate to the DynamoDB Console.
2. Click Create table and configure:
   - Table name: DemoFileMetadata.
   - Partition key: FileName (String).
   - Sort key: UploadTimestamp (String).
3. Click Create table.
4. Enable DynamoDB Streams with the option New and old images.

Step 3: Create an SNS Topic

1. Navigate to the SNS Console.
2. Click Create topic and configure:
   - Topic type: Standard.
   - Name: DemoFileProcessingNotifications.
3. Click Create topic.
4. Create a subscription and confirm it (in my case, notifications are sent to my email).

Step 4: Create a Lambda Function

1. Navigate to the Lambda Console and click Create function.
2. Choose Author from scratch and configure:
   - Function name: DemoFileProcessing.
   - Runtime: Select Node.js 20.x (or your preferred version).
   - Execution role: Select the LambdaTriggerRole you created earlier.
3. Click Create function.

Step 5: Configure Triggers

Add the S3 trigger:

1. Scroll to the Function overview section and click Add trigger.
2. Select S3 and configure:
   - Bucket: Select upload-csv-lambda-st.
   - Event type: Choose All object create events.
   - Suffix: Specify .csv to limit the trigger to CSV files.
3. Click Add.

Add the DynamoDB Streams trigger:

1. Scroll to the Function overview section and click Add trigger.
2. Select DynamoDB and configure:
   - Table: Select DemoFileMetadata.
3. Click Add.

Writing the Lambda Function

Below is a detailed breakdown of the Node.js Lambda function that handles events from the S3 and DynamoDB Streams triggers (source code):
const AWS = require("aws-sdk"); // Note: on Node.js 18+ runtimes, bundle aws-sdk v2 with your deployment package
const S3 = new AWS.S3();
const DynamoDB = new AWS.DynamoDB.DocumentClient();
const SNS = new AWS.SNS();

const SNS_TOPIC_ARN = "arn:aws:sns:region:account-id:DemoFileProcessingNotifications";

exports.handler = async (event) => {
    console.log("Event Received:", JSON.stringify(event, null, 2));

    try {
        if (event.Records[0].eventSource === "aws:s3") {
            // Process S3 Trigger
            for (const record of event.Records) {
                const bucketName = record.s3.bucket.name;
                const objectKey = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

                console.log(`File uploaded: ${bucketName}/${objectKey}`);

                // Save metadata to DynamoDB
                const timestamp = new Date().toISOString();
                await DynamoDB.put({
                    TableName: "DemoFileMetadata",
                    Item: {
                        FileName: objectKey,
                        UploadTimestamp: timestamp,
                        Status: "Processed",
                    },
                }).promise();

                console.log(`Metadata saved for file: ${objectKey}`);
            }
        } else if (event.Records[0].eventSource === "aws:dynamodb") {
            // Process DynamoDB Streams Trigger
            for (const record of event.Records) {
                if (record.eventName === "INSERT") {
                    const newItem = record.dynamodb.NewImage;

                    // Construct notification message
                    const message = `File ${newItem.FileName.S} uploaded at ${newItem.UploadTimestamp.S} has been processed.`;
                    console.log("Sending notification:", message);

                    // Send notification via SNS
                    await SNS.publish({
                        TopicArn: SNS_TOPIC_ARN,
                        Message: message,
                    }).promise();

                    console.log("Notification sent successfully.");
                }
            }
        }

        return {
            statusCode: 200,
            body: "Event processed successfully!",
        };
    } catch (error) {
        console.error("Error processing event:", error);
        throw error;
    }
};

Detailed Explanation

Importing Required AWS SDK Modules

const AWS = require("aws-sdk");
const S3 = new AWS.S3();
const DynamoDB = new AWS.DynamoDB.DocumentClient();
const SNS = new AWS.SNS();

- AWS SDK: Provides tools to interact with AWS services.
- S3 Module: Used to interact with the S3 bucket and retrieve file details.
- DynamoDB Module: Used to store metadata in the DynamoDB table.
- SNS Module: Used to publish messages to the SNS topic.

Defining the SNS Topic ARN

const SNS_TOPIC_ARN = "arn:aws:sns:region:account-id:DemoFileProcessingNotifications";

This is the ARN of the SNS topic where notifications will be sent. Replace it with the ARN of your actual topic.

Handling the Lambda Event

exports.handler = async (event) => {
    console.log("Event Received:", JSON.stringify(event, null, 2));

- The event parameter contains information about the trigger that activated the Lambda function.
- The event can come from S3 or DynamoDB Streams.
- The event is logged for debugging purposes.

Processing the S3 Trigger

if (event.Records[0].eventSource === "aws:s3") {
    for (const record of event.Records) {
        const bucketName = record.s3.bucket.name;
        const objectKey = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
        console.log(`File uploaded: ${bucketName}/${objectKey}`);

- Condition: Checks whether the event source is S3.
- Loop: Iterates over all records in the S3 event.
- Bucket Name and Object Key: Extracts the bucket name and object key from the event. decodeURIComponent() is used to handle special characters in the object key.
Saving Metadata to DynamoDB

const timestamp = new Date().toISOString();
await DynamoDB.put({
    TableName: "DemoFileMetadata",
    Item: {
        FileName: objectKey,
        UploadTimestamp: timestamp,
        Status: "Processed",
    },
}).promise();
console.log(`Metadata saved for file: ${objectKey}`);

- Timestamp: Captures the current time as the upload timestamp.
- DynamoDB Put Operation: Writes the file metadata to the DemoFileMetadata table, including the FileName, UploadTimestamp, and Status.
- Promise: The put method returns a promise, which is awaited to ensure the operation completes.

Processing the DynamoDB Streams Trigger

} else if (event.Records[0].eventSource === "aws:dynamodb") {
    for (const record of event.Records) {
        if (record.eventName === "INSERT") {
            const newItem = record.dynamodb.NewImage;

- Condition: Checks whether the event source is DynamoDB Streams.
- Loop: Iterates over all records in the DynamoDB Streams event.
- INSERT Event: Filters only for INSERT operations on the DynamoDB table.

Constructing and Sending the SNS Notification

const message = `File ${newItem.FileName.S} uploaded at ${newItem.UploadTimestamp.S} has been processed.`;
console.log("Sending notification:", message);
await SNS.publish({
    TopicArn: SNS_TOPIC_ARN,
    Message: message,
}).promise();
console.log("Notification sent successfully.");

- Constructing the Message: Uses the file name and upload timestamp from the DynamoDB Streams event.
- SNS Publish Operation: Sends the constructed message to the SNS topic.
- Promise: The publish method returns a promise, which is awaited to ensure the message is sent.

Error Handling

} catch (error) {
    console.error("Error processing event:", error);
    throw error;
}

- Any errors during event processing are caught and logged.
- The error is re-thrown to ensure it's recorded in CloudWatch Logs.

Lambda Function Response

return {
    statusCode: 200,
    body: "Event processed successfully!",
};

After processing all events, the function returns a successful response.

Test the Lambda Function

1. Upload the code into AWS Lambda.
2. Navigate to the S3 Console and choose the bucket you linked to the Lambda function.
3. Upload a random .csv file to the bucket.
4. Check the results:
   - DynamoDB table entry
   - SNS notification
   - CloudWatch Logs

So, we have successfully created a Lambda function that runs off two different triggers. It's pretty simple. Just remember to delete any services after use to avoid incurring unnecessary costs!

Conclusion

In this episode, we explored AWS Lambda's foundational concepts of triggers and events. Triggers allow Lambda functions to respond to specific actions or events, such as file uploads to S3 or changes in a DynamoDB table. In contrast, events are structured data passed to the Lambda function containing details about what triggered it.

We also implemented a practical example to demonstrate how a single Lambda function can handle multiple triggers:

- An S3 trigger processed uploaded files by extracting metadata and saving it to DynamoDB.
- A DynamoDB Streams trigger sent notifications via SNS when new metadata was added to the table.

This example illustrated the flexibility of Lambda's event-driven architecture and how it integrates seamlessly with AWS services to automate workflows. In the next episode, we'll discuss "Best Practices for Optimizing AWS Lambda Functions", covering performance optimization, effective error handling, and securing your Lambda functions. Stay tuned to continue enhancing your serverless expertise!
