-
Results to Experiment Details Page Navigation Improvement
When you click on the Results button of a specific experiment, you want to be able to navigate from the results screen to the Experiment details page. Currently, you instead have to go back to the main list of experiments and find your specific experiment in order to click into its details.
1 vote -
Versioning within an Experiment
A versioning capability within an experiment, so that when you are building, if someone makes unintended changes, they can revert back to the old version. There is interest in this feature both in the visual editor and for general changes in the experiment around page, audience, etc. These could be tackled separately.
1 vote -
Custom content in UI
Role: CoE lead for experimentation
Problem: lack of exposure to internal processes and standards, and to changes to them
Outcome: I would like to be able to insert custom text into the UI at various points, including but not limited to:
* flag list
* create flag overlay
* flag page - ruleset list
* rule definition page
* audience list
* audience creation screen
* attribute list
* attribute creation screen
* event list
* event creation screen
* etc. Basically, anywhere you create, define, update and/or name things.
The intent is to provide content that describes the current standards…
1 vote -
Flag relationships
I manage platform use.
I am trying to make it easier for users to see all related flags that might be used for a single delivery.
I would like a way to list 'related flags' somewhere in the flag definition. There can be multiple flags related to multiple flags; one flag might have a few different relations. The relation list should be links to the prod environment page for each defined related flag.
1 vote -
Setting significance on experiment level
We are supporting over 25 teams running experiments, running close to 500 experiments per year. The requirements regarding significance level differ strongly from team to team - sometimes even from experiment to experiment.
It would help us a lot if we could set the significance level for each experiment explicitly.
We also think this is low-hanging fruit, since we can see from API calls that the significance level is already a variable that exists per experiment. Also, as results are calculated every time the results page is viewed, this appears to be possible to implement without too much difficulty.
2 votes -
More prominent/named link to flag/rule definition from results
As a member of a center of excellence,
I would like to see a prominent link to the rule configuration in the results page
So I don't have to constantly explain to users the most efficient way to get back to the rule. This will decrease user frustration when trying to understand what the metric definitions are.
Most users have not figured out that the environment link takes you back to the rule itself.
1 vote -
Archive events in archived projects
The list events endpoint currently returns all events, including those from archived projects. This becomes problematic when there are a large number of such events. Could you enhance the archive project functionality so that it also archives the events within the project?
3 votes -
IP Filtering
IP filtering lets you exclude certain IP ranges from showing up in your experiment results. This is also how you can exclude yourself or your company from experiment results. - https://support.optimizely.com/hc/en-us/articles/4410283982989-IP-Filtering-Exclude-IP-addresses-or-ranges-from-your-results
This is currently available in Web Experimentation but not in Feature Experimentation.
Internal stakeholders and engineers regularly force themselves into experiments to demo and debug, and this may be impacting our results. We would like to be able to exclude these.
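For reference, the server-side check this feature implies is straightforward to sketch with the standard library (the ranges below are illustrative placeholders, not a real exclusion list):

```python
import ipaddress

# Hypothetical office/VPN ranges to exclude from results (placeholders).
EXCLUDED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_excluded(ip: str) -> bool:
    """Return True if the visitor's IP falls in any excluded range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in EXCLUDED_NETWORKS)

# Events from excluded IPs would simply be dropped before results aggregation.
is_excluded("203.0.113.10")  # inside the /24 exclusion
is_excluded("8.8.8.8")       # not excluded
```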
1 vote -
Provide SRM monitoring on the results page
Hey!
As Team Lead Web Analytics I often see teams struggling to solve SRM issues.
It would be immensely helpful if the results interface would provide a graph depicting the user distribution between variants. This seems like a low-hanging fruit for Optimizely and would facilitate debugging SRMs strongly.
Thanks!
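For anyone debugging this manually in the meantime: an SRM check for a two-variant experiment is a chi-squared goodness-of-fit test on the observed counts. A minimal stdlib-only sketch (the counts are made up, and the p-value shortcut via `erfc` only holds for one degree of freedom, i.e. exactly two variants):

```python
import math

def srm_check(counts, weights, alpha=0.05):
    """Chi-squared goodness-of-fit test for sample ratio mismatch.

    counts  : observed visitor counts per variant, e.g. [5000, 4800]
    weights : intended traffic split, e.g. [0.5, 0.5]
    Returns (statistic, p_value, srm_detected). The closed-form p-value
    below is valid only for 1 degree of freedom (two variants).
    """
    total = sum(counts)
    expected = [total * w for w in weights]
    stat = sum((o - e) ** 2 / e for o, e in zip(counts, expected))
    p = math.erfc(math.sqrt(stat / 2))  # chi2 survival function, df=1
    return stat, p, p < alpha

# A 50/50 experiment that observed 5000 vs 4800 visitors:
stat, p, srm = srm_check([5000, 4800], [0.5, 0.5])
```

A 200-visitor gap on ~10k visitors already trips the check at alpha = 0.05, which is why a distribution graph in the results UI would surface these issues quickly.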
1 vote -
Rule Promotion/Copy
As a rule administrator in Feature Experimentation, I want to be able to move a single rule between environments without having to copy all rules from one environment to another and having them all reset to Draft in the target environment.
3 votes -
Enhanced Search Functionality in Optimizely Experimentation
Problem:
Currently, the search function in Optimizely Experimentation only filters by a single word, even if multiple words are entered. For example, if a user searches for “Headers for PDP”, the results only return matches for “Headers”. This limits discoverability and forces users to rely on entering very specific single keywords.
Proposed Solution:
Update the search functionality to apply all words entered in the query, rather than restricting results to the first word only. This way, users can perform both broad and specific searches as needed.
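The proposed all-words matching can be sketched in a few lines (illustrative only; `items` stands in for whatever names the search indexes):

```python
def search(items, query):
    """Return items whose name contains every word in the query,
    case-insensitively, regardless of word order."""
    words = query.lower().split()
    return [item for item in items if all(w in item.lower() for w in words)]

names = ["Headers for PDP", "PDP hero banner", "Checkout headers test"]
search(names, "Headers PDP")  # only entries containing both words match
```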
Reasoning / Business Impact:
For customers running numerous tests, efficiently searching and reviewing experiments…
9 votes -
Changing the State of a Feature Flag in Draft Status Should Notify User that Flag is Not Yet in Running State
If a flag status is Draft and a user attempts to change the value of that flag, the UI should notify the user that they are changing a flag that is not yet in Running state.
This will prevent cases where someone thinks they have changed a flag state, but as far as the system is concerned, they have not.
1 vote -
Display bucketing ranges in the GUI (when changing traffic allocation)
I know that Optimizely tries to keep bucketing consistent when traffic allocation and/or distribution changes.
It would be great to be able to see the bucketing allocation (e.g. 1-5000 for A, 5001-10000 for B) in the interface.
In this example it is simple, but when ramping up (and possibly at the same time changing the distribution of traffic) it would be great to be able to verify via the interface which buckets are being set.
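For context, deterministic bucketing of this kind is usually a hash of the user and experiment IDs modulo a fixed bucket space; a rough sketch (MD5 is used here only to keep the example self-contained; real SDKs typically use MurmurHash3, and the 10,000-bucket space is an assumption, not confirmed behavior):

```python
import hashlib

TOTAL_BUCKETS = 10000  # assumed bucket space for illustration

def bucket(user_id: str, experiment_id: str) -> int:
    """Deterministically map a user to a bucket in [0, TOTAL_BUCKETS)."""
    digest = hashlib.md5(f"{user_id}:{experiment_id}".encode()).hexdigest()
    return int(digest, 16) % TOTAL_BUCKETS

def assign(user_id: str, experiment_id: str, ranges: dict) -> str:
    """Return the variant whose bucket range contains the user, or None
    if the user falls outside the allocated traffic."""
    b = bucket(user_id, experiment_id)
    for variant, r in ranges.items():
        if b in r:
            return variant
    return None

# 50/50 split over half the traffic: A gets buckets 0-2499, B gets 2500-4999,
# and buckets 5000-9999 are unallocated.
ranges = {"A": range(0, 2500), "B": range(2500, 5000)}
assign("user-42", "exp-1", ranges)
```

Exposing exactly these ranges in the UI is what would let you verify, after a ramp-up, which buckets each variant now owns.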
I am aware of Customer Profile Service but I see this as independent from that.
1 vote -
Disable losing variant(s) during experiment
As of now, in Feature Experimentation, if a variant performs (very) badly, there is no way to deactivate it or set its behavior back to the baseline without creating a new rule.
This of course slows down experimentation speed.
I do understand that the results of that variant are not usable after setting its behavior back to the baseline. However, that is not the issue. The goal is simply to be able to keep the test running while disabling a badly performing variant.
1 vote -
Folder/Organization System
I would like the ability to create folders within projects to organize our work. This would allow users to organize work by developer or by area of the site.
4 votes -
JIRA Integration for Feature Flags
Unfortunately, the JIRA integration is no longer available in the new Flags UI (it has not yet been migrated). This idea post is a request to make it available.
In our company we have a very close (1:1) relationship between JIRA tickets and experiment rules, which is why this integration is/was so helpful for relating code and work accordingly.
Many thanks in advance,
Michael
2 votes -
Datafile Relay Proxy
If Optimizely's CDN goes down or is inaccessible, the SDKs don't have a default fallback mechanism to evaluate feature flags without access to the datafile hosted on the CDN.
It's possible to initialize the SDK with a cached datafile, but that requires custom logic. Ideally, Optimizely could provide a default mechanism to provide a fallback datafile (e.g., a "relay proxy" service that caches the datafile, or a default mechanism within the SDKs).
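The custom logic referred to above is essentially a cache-and-fallback fetch; a minimal sketch (the URL and cache path are placeholders, and a production relay proxy would add TTLs, ETag revalidation, and atomic writes):

```python
import json
import os
import urllib.request

CDN_URL = "https://cdn.example.com/datafiles/SDK_KEY.json"  # placeholder URL
CACHE_PATH = "/tmp/optimizely_datafile.json"                # placeholder path

def fetch_datafile(url=CDN_URL, cache_path=CACHE_PATH, timeout=2):
    """Fetch the datafile, falling back to the last cached copy on failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            raw = resp.read()
        datafile = json.loads(raw)      # validate before caching
        with open(cache_path, "wb") as f:
            f.write(raw)                # refresh cache on every good fetch
        return datafile
    except (OSError, ValueError):       # network error or malformed JSON
        if os.path.exists(cache_path):
            with open(cache_path) as f:
                return json.load(f)     # serve last known-good datafile
        raise                           # no cache available: surface the error
```

SDKs that accept a raw datafile string at initialization could then be fed the result regardless of whether it came from the CDN or the cache, which is the behavior this idea asks to have built in by default.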
1 vote -
Introduction of filters
It would be great to have some kind of filters that allow a deeper dive into the data, e.g. seeing how many people bought a licence AFTER triggering a specified conversion.
1 vote