Feature Experimentation

56 results found

  1. While the decision object allows you to see which rule delivered a feature, it does not specify whether that rule is an A/B test, a targeted delivery, etc.
    Customer naming conventions could theoretically provide this kind of context, but we've received a request from Nike to make this available in the SDK itself.

    1 vote

  2. While the decision object returned from the SDK provides the flag status, variation key, and rule key, there is currently no built-in way to determine whether the rule that returned the decision was an A/B test or a targeted delivery. A workaround would be to use specific naming conventions for different rule types, but it would be useful to have this information baked into the decision object by default.
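Until the rule type is exposed directly, the naming-convention workaround described above can be sketched in a few lines. The `ab_` and `delivery_` rule-key prefixes below are an assumed team convention, not anything Optimizely enforces:

```python
# Infer the rule type from a rule-key naming convention, since the
# decision object itself does not say whether the rule was an
# A/B test or a targeted delivery.
RULE_TYPE_PREFIXES = {
    "ab_": "a/b_test",
    "delivery_": "targeted_delivery",
}

def infer_rule_type(rule_key):
    """Return the rule type implied by the rule key's prefix, or None."""
    if not rule_key:
        return None
    for prefix, rule_type in RULE_TYPE_PREFIXES.items():
        if rule_key.startswith(prefix):
            return rule_type
    return None

print(infer_rule_type("ab_checkout_test"))   # a/b_test
print(infer_rule_type("delivery_checkout"))  # targeted_delivery
```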

    1 vote

  3. Currently, the decision object returned by an SDK's decide method includes the flag, rule, and variation keys that a user was bucketed into, but does not return the experiment rule and variation key that the user was bucketed into.

    1 vote

  4. The Optimizely for Jira Integration (https://marketplace.atlassian.com/apps/1219783/optimizely-for-jira?tab=overview&hosting=cloud) currently appears in all Jira projects when enabled for a Jira instance, with no way to turn the integration off for projects that don't use Optimizely.
    It should be possible to enable or disable the Jira Integration per Jira project so it only shows up for projects where Optimizely is relevant.

    1 vote

  5. This is a request to create a REST API endpoint that satisfies the following requirement:
    a List Flags endpoint that returns all flags, covering both flags with rules (including variation names) and flags with targeted deliveries, specifying which variation is enabled in each delivery.

    1 vote

  6. When calling decideAll() with a User Profile Service (UPS), the method performs a UPS lookup for each flag.
    The request is that the SDKs be updated to default to a single lookup covering all flags.
    This would prevent unnecessary resource consumption for customers who use the decideAll() method and also rely on a UPS.
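One way to approximate the single-lookup behaviour today is to wrap the UPS in a caching layer before handing it to the SDK. The sketch below uses a stand-in in-memory profile service; the `lookup`/`save` method names follow the documented UPS interface, but the caching wrapper itself is an assumption, not something the SDK provides:

```python
# A caching wrapper around a User Profile Service, so that a burst of
# per-flag lookups (as decideAll performs today) hits the underlying
# store only once per user. InMemoryProfileService stands in for a
# real backing store such as Redis or a database.
class InMemoryProfileService:
    def __init__(self):
        self.profiles = {}
        self.lookup_calls = 0  # instrumentation for the demo below

    def lookup(self, user_id):
        self.lookup_calls += 1
        return self.profiles.get(user_id)

    def save(self, profile):
        self.profiles[profile["user_id"]] = profile

class CachingProfileService:
    def __init__(self, inner):
        self.inner = inner
        self._cache = {}

    def lookup(self, user_id):
        if user_id not in self._cache:
            self._cache[user_id] = self.inner.lookup(user_id)
        return self._cache[user_id]

    def save(self, profile):
        self.inner.save(profile)
        self._cache[profile["user_id"]] = profile

backing = InMemoryProfileService()
ups = CachingProfileService(backing)
for _ in range(10):          # simulate decideAll looking up 10 flags
    ups.lookup("user-123")
print(backing.lookup_calls)  # 1
```

Note that a simple per-user cache like this never invalidates; in a long-lived process you would scope it to a single decideAll call or give entries a TTL.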

    1 vote

  7. requester jake.schlack@dat.com

    Type of issue: Feature Experimentation
    Jake Schlack sent feedback.
    Description: Good afternoon,
    My team and I would like to run an A/B test that measures the median time that occurs between two tracking events in our product.
    Based on your documentation, it looks like we can create a "value" property that tracks the time between these two events (in the second, concluding event).
    My question is: How do we then set up a "Total Value" A/B Test that looks at a metric other than counts or sums? In this case, we want to use a median, but…
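For context, the "value" property described above is typically attached as an event tag on the concluding event. A minimal sketch, with illustrative timestamps and a hypothetical `flow_completed` event name:

```python
# Attach the elapsed time between two tracked events as the reserved
# "value" event tag on the second, concluding event. The event name and
# the surrounding SDK call are assumptions for illustration.
def build_value_tags(start_ts, end_ts):
    """Event tags for the concluding event, carrying elapsed seconds."""
    return {"value": end_ts - start_ts}

start_ts = 100.0  # recorded when the first event fires
end_ts = 112.5    # recorded when the concluding event fires
tags = build_value_tags(start_ts, end_ts)
# e.g. user_context.track_event("flow_completed", tags)
print(tags)  # {'value': 12.5}
```

Whether results can then aggregate that value as a median rather than a count or sum is the open question in the request above.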

    2 votes

  8. As an experimenter, I would like an audience to auto-populate for an entire project. The bot filtering provided within Optimizely does not exclude additional bots that we need to remove from testing as an organization, so we have to apply an extra audience to our tests to exclude these bots. Currently we do this manually for every test, which invites human error. We would like this audience to auto-populate for all feature experiments within a project so the step cannot be forgotten.

    1 vote

  9. requester chris.thompson@vividseats.com

    Type of issue: Product feedback
    Christopher Thompson sent feedback.
    Description: When I am updating feature flags in app.optimizely.com, the cancel button is obstructed by the notification/feedback icon, and there's no way to ignore/clear the notifications.
    Can this be moved to another part of the screen so that I can clearly see which save/cancel/revert buttons are available? I would add a screenshot, but it doesn't look like this form will allow it.
    Thanks
    User email: chris.thompson@vividseats.com
    User id: vxrBU3yORCaFdlyq9VMemg
    Account: Vivid Seats
    Account id: d431d760-1424-9252-92b1-c84e8e5af9e8

    2 votes

  10. When a task is archived, a notification is sent to all watchers assigned to the task.

    1 vote

  11. requester ivan.njunjic@aura.com

    Type of issue: Feature Experimentation
    Ivan Njunjic sent feedback.
    Description: I'd like to measure the effectiveness of our experimentation program through a quarter-long holdout experiment. Is there an easy, self-serve method to set up holdout groups (say, 5% of traffic that would not receive any experiment treatments during the quarter), or do we need to manage this in code via a separate flag? Thanks!
    User email: ivan.njunjic@aura.com
    User id: 3y-jacmkR2mDxmoPks63Vg
    Account id: 7e76915c-e202-a8ec-da88-eda5848b9576
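A code-managed holdout of the kind described can be sketched as a gate in front of all decide calls. The hashing scheme and 5% split below are illustrative, not Optimizely's bucketing:

```python
import hashlib

# A code-level holdout: a fixed ~5% of users, chosen by a stable hash of
# the user id, are excluded from all experiment treatments for the
# quarter and receive the control experience instead.
HOLDOUT_PERCENT = 5

def in_holdout(user_id):
    """Deterministically bucket a user into 0-99; True for ~5% of ids."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < HOLDOUT_PERCENT

def decide_with_holdout(user_id, decide_fn):
    """Skip experiment decisions entirely for held-out users."""
    if in_holdout(user_id):
        return None  # control experience, no treatments
    return decide_fn(user_id)
```

The same user always lands in the same bucket, so the holdout stays stable for the whole quarter without any per-user state.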

    2 votes

  12. requester ahmet.atasoy01.ext@bbc.co.uk

    Type of issue: Feature Experimentation
    Description: Hi,
    We are using the Feature Rollout functionality and having difficulty enabling/disabling the feature for QA/Dev users.
    What we would like is for a certain audience to have the feature disabled completely while "Everyone" stays at 100%. We looked at the documentation (https://docs.developers.optimizely.com/feature-experimentation/docs/target-audiences#advanced-audience-information) and confirmed the statement below: with an audience set to 0% and "Everyone" at 100%, the feature for that audience is still delivered at 100%.
    "For example, you cannot exclude all Brazilian users by simply dialing back to 0% traffic for that audience. That is…

    2 votes

  13. requester ivan.kucheriavenko@printify.com

    Type of issue: Feature Experimentation
    Ivan Kucheriavenko sent feedback.
    Description: Hello!
    In our Full Stack project, we use the API to update a whitelist - https://docs.developers.optimizely.com/full-stack-experimentation/reference/update_experiment
    This API is not available in Feature Experimentation - https://docs.developers.optimizely.com/feature-experimentation/reference/experiments
    Could you please advise whether it is possible to manage an allowlist in Feature Experimentation via the API?
    Thank you!
    User email: ivan.kucheriavenko@printify.com
    User id: ZwTZutqQSxS-pFtGogihmA
    Account: Printify, Inc.
    Account id: cb9a320a-43ad-02f0-0cf2-6b972153b9ad

    2 votes

  14. As a developer, I want to be able to use an Optimizely provider for the CNCF OpenFeature project so that I don't have to learn a vendor-specific SDK to integrate my codebase with Optimizely, resulting in faster adoption of Optimizely in my applications.
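An OpenFeature provider of this kind is essentially an adapter around the vendor SDK. The sketch below shows the shape in plain Python; the class and method names are hypothetical stand-ins, not the actual OpenFeature or Optimizely interfaces:

```python
# Adapter sketch: an OpenFeature-style provider delegating flag
# evaluation to an Optimizely client. FakeOptimizelyClient stands in
# for the real SDK client.
class FakeOptimizelyClient:
    """Minimal stand-in that resolves flags from a static map."""
    def __init__(self, flags):
        self._flags = flags

    def decide(self, user_id, flag_key):
        return self._flags.get(flag_key)

class OptimizelyProvider:
    """Resolves vendor-neutral flag lookups through the Optimizely client."""
    def __init__(self, client):
        self._client = client

    def resolve_boolean_value(self, flag_key, default, context):
        # OpenFeature's evaluation context carries a targeting key,
        # which maps naturally onto Optimizely's user id.
        user_id = context.get("targeting_key", "anonymous")
        value = self._client.decide(user_id, flag_key)
        return default if value is None else value

provider = OptimizelyProvider(FakeOptimizelyClient({"new_checkout": True}))
print(provider.resolve_boolean_value("new_checkout", False, {"targeting_key": "u1"}))  # True
print(provider.resolve_boolean_value("missing_flag", False, {"targeting_key": "u1"}))  # False
```

Application code would then call the vendor-neutral API only, so swapping flag vendors means swapping providers, not rewriting call sites.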

    3 votes

  15. In Flags, it is not possible to change the order of variants and thereby have a different default "Baseline" variation on the Results page.
    Regardless of the possible workaround (changing the baseline variation from the Results page), the proper behaviour would be the ability to switch the order of the variants, and so decide the default "Baseline" variant, directly on the Flags page.

    2 votes

  16. Clients want an approval workflow before a flag or test is started, so that changing a flag's configuration requires sign-off from a second pair of eyes before the change goes live (similar to approving code changes).

    4 votes
