Feature Experimentation

56 results found

  1. The Optimizely for Jira integration (https://marketplace.atlassian.com/apps/1219783/optimizely-for-jira?tab=overview&hosting=cloud) currently appears in all Jira projects once it is enabled for a Jira instance, with no way to turn it off for projects that don't use Optimizely.
    It should be possible to enable or disable the integration per Jira project so it only appears where Optimizely is relevant.

    1 vote
  2. IP filtering lets you exclude certain IP ranges from showing up in your experiment results; it is also how you can exclude yourself or your company from those results. See https://support.optimizely.com/hc/en-us/articles/4410283982989-IP-Filtering-Exclude-IP-addresses-or-ranges-from-your-results

    This is currently available in Web Experimentation but not in Feature Experimentation.

    Internal stakeholders and engineers regularly force themselves into experiments to demo and debug, and this is likely skewing our results. We would like to be able to exclude these users and IP ranges from our results.
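    In the meantime, a common workaround is to tag internal traffic with a user attribute and exclude it via an audience condition (or filter it out when analyzing exported results). A minimal TypeScript sketch, assuming the standard @optimizely/optimizely-sdk JavaScript API; the IP prefix list, helper, and flag key are hypothetical:

```typescript
import { createInstance } from '@optimizely/optimizely-sdk';

// Hypothetical list of internal network prefixes; a real implementation would
// use a proper CIDR matcher. IP filtering itself is not a Feature
// Experimentation capability today, which is the point of this idea.
const INTERNAL_PREFIXES = ['10.', '192.168.', '203.0.113.'];

function isInternalIp(ip: string): boolean {
  return INTERNAL_PREFIXES.some((prefix) => ip.startsWith(prefix));
}

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

export async function decideForRequest(userId: string, requestIp: string) {
  await optimizely!.onReady();

  // Tag the user so an "is_internal is false" audience condition on the rule,
  // or a post-hoc filter on exported results, can exclude internal traffic.
  const user = optimizely!.createUserContext(userId, {
    is_internal: isInternalIp(requestIp),
  });

  return user!.decide('checkout_redesign'); // hypothetical flag key
}
```

    The audience condition on is_internal still has to be configured in the Optimizely app; the sketch only covers the instrumentation side.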

    1 vote  ·  New  ·  0 comments
  3. Hey!

    As a Team Lead Web Analytics, I often see teams struggling to solve sample ratio mismatch (SRM) issues.

    It would be immensely helpful if the results interface provided a graph of the user distribution between variants. This seems like low-hanging fruit for Optimizely and would greatly facilitate debugging SRMs.

    Thanks!
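    For reference, the check behind an SRM alert is just a chi-square goodness-of-fit test on the bucketed visitor counts; a short TypeScript sketch of that check (a generic illustration, not Optimizely's implementation):

```typescript
// Minimal sample ratio mismatch (SRM) check: a chi-square goodness-of-fit
// test comparing observed visitor counts per variant with the counts
// expected from the configured traffic split.
function srmChiSquare(observed: number[], expectedShare: number[]): number {
  const total = observed.reduce((a, b) => a + b, 0);
  return observed.reduce((chi2, obs, i) => {
    const exp = total * expectedShare[i];
    return chi2 + (obs - exp) ** 2 / exp;
  }, 0);
}

// Example: a 50/50 experiment that bucketed 10,480 vs 10,020 visitors.
const chi2 = srmChiSquare([10_480, 10_020], [0.5, 0.5]);
// With 1 degree of freedom, chi2 > 3.84 is significant at p < 0.05, so the
// value of roughly 10.3 here indicates a likely SRM.
console.log(chi2.toFixed(2));
```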

    1 vote  ·  New  ·  0 comments  ·  UX
  4. 1 vote  ·  New  ·  0 comments
  5. If a flag status is Draft and a user attempts to change the value of that flag, the UI should notify the user that they are changing a flag that is not yet in Running state.

    This will prevent cases where someone thinks they have changed a flag state, but as far as the system is concerned, they have not.

    1 vote  ·  Gathering Feedback  ·  0 comments  ·  UX
  6. I know that Optimizely tries to keep bucketing consistent when traffic allocation and/or distribution changes.
    It would be great to be able to see the bucketing allocation (e.g., 1-5000 for A, 5001-10000 for B) in the interface.
    In this example it is simple, but when ramping up (and possibly changing the traffic distribution at the same time) it would be great to be able to verify via the interface which bucket ranges are in use.
    I am aware of the Customer Profile Service, but I see this as independent of that.
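    Bucketing hashes the user into one of 10,000 buckets (hence the 1-5000 / 5001-10000 ranges above) and assigns variations by cumulative ranges over those buckets. A TypeScript sketch of how such ranges could be derived for display, purely illustrative and not the SDK's actual code:

```typescript
// Derive cumulative bucket ranges (out of 10,000 buckets) from a variation
// traffic distribution, the way a UI could display them.
interface VariationSplit {
  key: string;
  percentage: number; // e.g. 50 for 50%
}

function bucketRanges(splits: VariationSplit[]): { key: string; start: number; end: number }[] {
  const ranges: { key: string; start: number; end: number }[] = [];
  let cursor = 0;
  for (const { key, percentage } of splits) {
    const width = Math.round((percentage / 100) * 10_000);
    if (width === 0) continue; // variation receives no traffic
    ranges.push({ key, start: cursor + 1, end: cursor + width });
    cursor += width;
  }
  return ranges;
}

// Example: a 50/50 split yields 1-5000 for A and 5001-10000 for B.
console.log(bucketRanges([{ key: 'A', percentage: 50 }, { key: 'B', percentage: 50 }]));
```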

    1 vote  ·  New  ·  0 comments  ·  UX
  7. As of now, in Feature Experimentation, if a variant performs (very) badly, there is no way to deactivate it or set its behavior back to baseline without creating a new rule.
    This, of course, slows down experimentation.
    I do understand that the results for that variant are no longer usable after its behavior is set back to baseline. However, that is not the issue: the goal is simply to be able to let the test keep running while disabling a badly performing variant.

    1 vote  ·  New  ·  0 comments  ·  UX
  8. If Optimizely's CDN goes down or is inaccessible, the SDKs have no default fallback mechanism for evaluating feature flags without access to the datafile hosted on the CDN.

    It's possible to initialize the SDK with a cached datafile, but that requires custom logic. Ideally, Optimizely would provide a built-in fallback (e.g., a "relay proxy" service that caches the datafile, or a default mechanism within the SDKs).
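    For context, that custom logic looks roughly like the sketch below: persist the last datafile you saw and pass it alongside the sdkKey, relying on the JavaScript SDK's documented ability to accept both. The cache path and the process that refreshes the cached file are assumptions.

```typescript
import { readFileSync } from 'node:fs';
import { createInstance } from '@optimizely/optimizely-sdk';

// Keep the last known datafile on disk and hand it to the SDK at startup so
// flags can still be evaluated while the CDN is unreachable.
const CACHE_PATH = '/var/cache/optimizely-datafile.json';

function loadCachedDatafile(): string | undefined {
  try {
    return readFileSync(CACHE_PATH, 'utf8'); // raw datafile JSON saved earlier
  } catch {
    return undefined; // no cache yet; the SDK waits for the CDN fetch instead
  }
}

// Providing both sdkKey and datafile lets the SDK start from the cached copy
// and switch to the live datafile once the CDN responds again.
const optimizely = createInstance({
  sdkKey: '<YOUR_SDK_KEY>',
  datafile: loadCachedDatafile(),
});

export default optimizely;
```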

    1 vote  ·  New  ·  0 comments  ·  SDKs
  9. It would be great to have filters that allow a deeper dive into the results data, for example to see how many people bought a licence AFTER triggering a specified conversion event.

    1 vote  ·  Gathering Feedback  ·  0 comments  ·  UX
  10. Customers want the metrics they already track in their analytics tools to be available in Optimizely. Today they have to add new event instrumentation manually. It would be great to integrate with other analytics platforms and import their event tracking into Optimizely (e.g., similar to the Segment integration).

    1 vote
  11. Allow activity notifications sent via webhook to be filtered to the production environment only.
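    Until such a filter exists, the filtering can be done on the receiving end. A TypeScript/Express sketch that drops non-production notifications; the payload field checked here (data.environment) is hypothetical, so verify it against the actual webhook schema:

```typescript
import express from 'express';

// Receiver-side filter: accept Optimizely webhook calls but act only on the
// ones for the production environment.
const app = express();
app.use(express.json());

app.post('/optimizely-webhook', (req, res) => {
  const environment: string | undefined = req.body?.data?.environment; // assumed field

  if (environment !== 'production') {
    res.status(204).end(); // acknowledge but ignore non-production notifications
    return;
  }

  notifyTeam(req.body); // forward production notifications to chat, ticketing, etc.
  res.status(200).end();
});

// Hypothetical downstream notifier.
function notifyTeam(payload: unknown): void {
  console.log('Production activity:', JSON.stringify(payload));
}

app.listen(3000);
```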

    1 vote  ·  New  ·  0 comments
  12. Make the SDKs compatible with the OpenFeature standard.
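    For illustration, an adapter could map OpenFeature flag evaluations onto the decide API. The sketch below uses simplified stand-in types; the real Provider interface in @openfeature/server-sdk is richer (four typed resolvers, hooks, and events), and only the JavaScript SDK calls shown are assumed:

```typescript
import { createInstance } from '@optimizely/optimizely-sdk';

// Simplified stand-ins for the OpenFeature evaluation types.
interface EvaluationContext {
  targetingKey: string;
  [attribute: string]: string | number | boolean;
}

interface ResolutionDetails<T> {
  value: T;
  variant?: string;
  reason?: string;
}

const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

export class OptimizelyBooleanProvider {
  readonly metadata = { name: 'optimizely-feature-experimentation' };

  async resolveBooleanEvaluation(
    flagKey: string,
    defaultValue: boolean,
    context: EvaluationContext,
  ): Promise<ResolutionDetails<boolean>> {
    await optimizely!.onReady();
    const { targetingKey, ...attributes } = context;
    const user = optimizely!.createUserContext(targetingKey, attributes);
    if (!user) {
      return { value: defaultValue, reason: 'ERROR' };
    }

    const decision = user.decide(flagKey);
    return {
      value: decision.enabled,
      variant: decision.variationKey ?? undefined,
      reason: 'TARGETING_MATCH', // a real provider would map Optimizely's decide reasons
    };
  }
}
```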

    1 vote  ·  New  ·  0 comments  ·  SDKs
  13. Ability for developers to enable an event inspector to verify that instrumentation is working correctly.
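    A rough version of this can already be assembled with the SDK's notification center: log every decision and every outgoing event batch when a debug switch is on. A TypeScript sketch, with loosely typed payloads and an assumed OPTIMIZELY_DEBUG environment variable:

```typescript
import { createInstance, enums } from '@optimizely/optimizely-sdk';

// DIY "event inspector": surface decisions and outgoing event batches so
// developers can confirm instrumentation before trusting results.
const optimizely = createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

if (process.env.OPTIMIZELY_DEBUG === 'true') {
  optimizely?.notificationCenter.addNotificationListener(
    enums.NOTIFICATION_TYPES.DECISION,
    (decisionInfo) => console.debug('[optimizely] decision', decisionInfo),
  );

  optimizely?.notificationCenter.addNotificationListener(
    enums.NOTIFICATION_TYPES.LOG_EVENT,
    (logEvent) => console.debug('[optimizely] outgoing event batch', logEvent),
  );
}

export default optimizely;
```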

    1 vote
  14. Almost none of my customers implement long-term flags. Every time they run a new experiment they create a new flag and remove it afterwards, even when they could have reused that flag for a new experiment a month later. This creates a lot of additional work.

    1 vote  ·  New  ·  0 comments
  15. Help customers understand how flags/experiments are affecting site performance.

    1 vote