Below is a structured set of question ideas tailored to the new features you've described, incorporating product-management best practices. Each section addresses first impressions, satisfaction, usage patterns, and open-ended improvement ideas.


1. Overall Usage & Impression

  1. Usage Frequency

    • Example: "How often do you use INCL (the service)?"
      • Multiple times a day
      • Once a day
      • A few times a week
      • A few times a month
      • Rarely or never
        Why: Helps you segment users by engagement level (see the sketch at the end of this section).
  2. Overall Satisfaction

    • Example: "On a scale of 1โ€“5 (1 = Very Dissatisfied, 5 = Very Satisfied), how satisfied are you with the new version of INCL overall?"
      Why: Quick snapshot of user sentiment.
  3. Most/Least Valuable New Feature

    • Example: "Which new feature (e.g., automatic background refresh, shortcut keys, updated at method, etc.) do you find most valuable? Least valuable?"
      Why: Identifies what's driving user delight or frustration.
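
To make these first three questions actionable, cross-tabulate engagement with sentiment once responses come in. Below is a minimal Python sketch of that segmentation; the file name and column names (`usage_frequency`, `overall_satisfaction`) are hypothetical placeholders for whatever your survey tool actually exports.

```python
import csv
from collections import defaultdict

# Hypothetical export: one row per respondent, with columns holding the
# answers to question 1 (usage frequency) and question 2 (1-5 satisfaction).
ratings_by_segment = defaultdict(list)

with open("survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        segment = row["usage_frequency"]           # e.g. "Once a day"
        rating = int(row["overall_satisfaction"])  # 1-5 scale
        ratings_by_segment[segment].append(rating)

# Mean satisfaction per engagement segment, highest first.
for segment, ratings in sorted(
    ratings_by_segment.items(),
    key=lambda item: sum(item[1]) / len(item[1]),
    reverse=True,
):
    print(f"{segment:>22}: {sum(ratings) / len(ratings):.2f} (n={len(ratings)})")
```

A table like this makes it easy to spot, for example, whether daily users are noticeably less satisfied than occasional ones, which is usually the first segmentation worth acting on.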

2. Page-Level Feedback

Below are question sets for each main page or feature you want feedback on. Use a similar format (first impression, satisfaction, ideas) to keep feedback structured and easy to analyze.

A. Analyzer Page

  1. First Impression

    • Example: "What was your first impression of the Bleeding Fast Analyzer on the Analyzer Page?"
      Why: Captures initial emotional and functional reactions.
  2. Satisfaction

    • Example: "How satisfied are you with the Analyzer Page's speed and accuracy? (1โ€“5 scale)"
      Why: Quantifies user sentiment.
  3. Ideas for Improvement

    • Example: "What improvements or additional capabilities would you like to see in the Analyzer Page?"
      Why: Generates direct insights on how to enhance the analyzer.

B. Graphs

  1. First Impression

    • Example: "How would you rate the new multiple-graph visualization (up to 8 graphs and 50 jobs)?
      • Very Confusing
      • Somewhat Confusing
      • Neutral
      • Somewhat Clear
      • Very Clear"
        Why: Measures the clarity of this new, more robust graphing feature.
  2. Satisfaction

    • Example: "On a scale of 1โ€“5, how satisfied are you with the new graphing capabilities?"
      Why: Quick quantification for trending over time.
  3. Ideas for Improvement

    • Example: "Are there any additional metrics or views you wish you could visualize here?"
      Why: Surfaces needs for deeper analytics or different graph types.

C. Lab Page (Improved Project Tree)

  1. First Impression

    • Example: "How intuitive did you find the new Project Tree layout on the Lab Page when you first saw it?"
      Why: Measures the learning curve and clarity.
  2. Satisfaction

    • Example: "How satisfied are you with navigating and organizing projects in the new Project Tree? (1โ€“5 scale)"
      Why: Assesses usability and efficiency gains.
  3. Ideas for Improvement

    • Example: "What changes or additional features would make the Lab Page more useful for your workflow?"
      Why: Directs you to potential enhancements or missing functionalities.

D. Right-Click / Multiple-Choice Features (Worklist)

  1. First Impression & Discoverability

    • Example: "How quickly did you discover the right-click or multiple-choice features for creating worklists?"
      Why: Identifies if you need better onboarding or tooltips.
  2. Satisfaction & Usability

    • Example: "How satisfied are you with the ease of creating and managing worklists via the right-click menu? (1โ€“5 scale)"
      Why: Measures how much friction users experience with these shortcuts.
  3. Ideas for Improvement

    • Example: "What additional actions or shortcuts would you like to see in the right-click menu?"
      Why: Helps guide future right-click or context-menu expansions.

3. Log Page Enhancements

  1. First Impression of New Renderer

    • Example: "Did you notice the new VS Codeโ€“style renderer with syntax highlighting? If so, how helpful do you find it?"
      Why: Checks if the improvement is recognized and valuable.
  2. Granular Following Options

    • Example: "Which of the following options (Mouse Auto-scroll, go to start, go to end) do you use most often?"
      Why: Reveals which features users rely on, guiding focus for improvements.
  3. Satisfaction

    • Example: "Rate your satisfaction with the Log Page's new features (1โ€“5 scale)."
      Why: Provides a quick measure of overall sentiment.
  4. Ideas for Improvement

    • Example: "Are there any other log viewing or searching features that would make your job easier?"
      Why: Surfaces potential expansions (e.g., advanced search, filters, etc.).

4. Settings (Dark/Light Theme)

  1. Theme Preference

    • Example: "Which theme do you prefer to use most often (Light or Dark)?"
      Why: Basic usage data for prioritizing future design or default settings.
  2. Satisfaction

    • Example: "How satisfied are you with the color schemes and readability of each theme?"
      Why: Gauges design effectiveness and helps refine color palettes.
  3. Additional Customization

    • Example: "Are there any additional customization options (e.g., font size, color presets) you'd like to see?"
      Why: Shows how you can cater to specific user needs and preferences.

5. Additional Feature Highlights

A. Automatic Background Refresh

  1. Usage & Noticing

    • Example: "Have you noticed the automatic background refresh feature? If yes, how has it affected your workflow?"
      Why: Ensures users are aware of the feature and captures impact.
  2. Control & Customization

    • Example: "Would you like more control over the refresh frequency or notifications?"
      Why: Lets you know if users want more granular settings.

B. Shortcut Keys

  1. Discovery & Usage

    • Example: "Did you discover or start using any new shortcut keys? If so, which ones?"
      Why: Identifies popular shortcuts and if discoverability is an issue.
  2. Effect on Productivity

    • Example: "Do you feel the new shortcuts have increased your productivity?
      • Yes, significantly
      • Somewhat
      • No noticeable difference
      • I didn't know there were new shortcuts"
        Why: Measures effectiveness and user awareness.
  3. Requests for More

    • Example: "Are there any additional shortcuts or custom shortcut configurations you'd find helpful?"
      Why: Helps expand keyboard-centric functionality.

C. 'Updated at' Method & Hover Displays

  1. Usefulness & Clarity

    • Example: "Have you found the new 'Updated at' timestamp and the full display-on-hover feature useful?
      • Very Useful
      • Somewhat Useful
      • Neutral
      • Not Useful
      • Didn't Notice"
        Why: Verifies if subtle UX changes are recognized and appreciated.
  2. Any Suggestions

    • Example: "Is there anything you would change about these indicators or how they're displayed?"
      Why: Fine-tunes micro-interactions.

6. Wrapping Up: Open-Ended Questions

  1. Overall "One Thing"

    • Example: "If you could change or improve one thing about the new INCL, what would it be?"
      Why: Always useful for surfacing the single biggest pain point or wish.
  2. Additional Comments

    • Example: "Is there anything else you'd like to share regarding your experience with the new features?"
      Why: Gives users space to voice feedback you might not have anticipated.

Best Practices to Remember

  1. Keep it Short
    • Only include the questions most critical to your goals.
  2. Use Balanced Answer Choices
    • Provide positive, negative, and neutral options.
  3. Mix Quantitative & Qualitative
    • Combine rating scales (e.g., 1–5) with open-ended questions for deeper insights (see the sketch after this list).
  4. Make It Discoverable
    • Provide an in-app link or notification for the survey so users can quickly respond.
  5. Close the Loop
    • After collecting feedback, share any planned improvements or changes so users know they've been heard.
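
As a rough illustration of practices 3 and 5, the sketch below pulls out the open-ended comments attached to low ratings so the team can follow up directly. Again, the file and column names (`overall_satisfaction`, `one_thing_to_improve`) are hypothetical placeholders for your actual export.

```python
import csv

LOW_RATING_THRESHOLD = 2  # a 1 or 2 on the 1-5 scale triggers a follow-up

# Hypothetical export: the 1-5 overall rating plus the open-ended
# "one thing you would change" answer from the wrap-up section.
with open("survey_responses.csv", newline="") as f:
    responses = list(csv.DictReader(f))

follow_ups = [
    (int(row["overall_satisfaction"]), row["one_thing_to_improve"].strip())
    for row in responses
    if int(row["overall_satisfaction"]) <= LOW_RATING_THRESHOLD
    and row["one_thing_to_improve"].strip()
]

# Lowest ratings first, so the biggest pain points surface at the top.
for rating, comment in sorted(follow_ups):
    print(f"[{rating}/5] {comment}")
```

Pairing each score with its verbatim comment keeps the qualitative context attached to the quantitative signal, and the resulting list doubles as the follow-up queue for closing the loop.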

By asking these targeted questions for each new feature or page, you'll gain actionable insights into user satisfaction, usability, and adoption. Tailor the language and specific answer options to fit your brand voice and the depth of feedback you want. Good luck gathering great insights from your users!
