56 results found
-
Archive events in archived projects
The list events endpoint currently returns all events, including those from archived projects. This becomes problematic when there are a large number of such events. Could you enhance the archive project functionality so that it also archives the events within the project?
1 vote -
IP Filtering
IP filtering lets you exclude certain IP ranges from showing up in your experiment results. This is also how you can exclude yourself or your company from experiment results. - https://support.optimizely.com/hc/en-us/articles/4410283982989-IP-Filtering-Exclude-IP-addresses-or-ranges-from-your-results
This is currently available in Web Experimentation but not in Feature Experimentation.
Internal stakeholders and engineers regularly force themselves into experiments to demo and debug, and this is likely impacting our results. We would like to be able to exclude these users.
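A rough sketch of the kind of client-side workaround this forces today, assuming the visitor IP is known at decision time; the helper names and CIDR ranges below are illustrative, and only the SDK's standard track call is assumed:

```python
import ipaddress

# Illustrative internal ranges to exclude (placeholders, not an Optimizely setting).
EXCLUDED_RANGES = [ipaddress.ip_network(cidr) for cidr in ("10.0.0.0/8", "203.0.113.0/24")]

def is_internal(ip: str) -> bool:
    """True when the visitor IP falls inside one of the excluded ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in EXCLUDED_RANGES)

def track_if_external(optimizely_client, user_id: str, ip: str, event_key: str) -> None:
    """Only send the conversion event for traffic outside the excluded ranges."""
    if not is_internal(ip):
        optimizely_client.track(event_key, user_id)
```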
1 vote -
Provide SRM monitoring on the results page
Hey!
As a Team Lead for Web Analytics, I often see teams struggling to solve SRM issues.
It would be immensely helpful if the results interface provided a graph depicting the user distribution between variants. This seems like low-hanging fruit for Optimizely and would greatly facilitate debugging SRMs.
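For context, a minimal sketch of the kind of check such a graph would support, using a chi-square goodness-of-fit test on observed visitor counts (the counts and the 0.001 alarm threshold are illustrative):

```python
from scipy.stats import chisquare

def srm_p_value(observed_counts, expected_weights):
    """Chi-square goodness-of-fit p-value for observed visitors vs. the planned split."""
    total = sum(observed_counts)
    expected = [total * w / sum(expected_weights) for w in expected_weights]
    return chisquare(f_obs=observed_counts, f_exp=expected).pvalue

# Example: a 50/50 test that bucketed 5,300 vs. 4,700 visitors.
p = srm_p_value([5300, 4700], [50, 50])
if p < 0.001:  # a common SRM alarm threshold
    print(f"Possible SRM (p = {p:.2e}); investigate before trusting the results.")
```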
Thanks!
1 vote -
Rule Promotion/Copy
As a rule administrator in Feature Experimentation, I want to be able to move a single rule between environments without having to copy all rules from one environment to another and having them all reset to draft in the target environment.
3 votes -
Enhanced Search Functionality in Optimizely Experimentation
Problem:
Currently, the search function in Optimizely Experimentation only filters by a single word, even if multiple words are entered. For example, if a user searches for “Headers for PDP”, the results only return matches for “Headers”. This limits discoverability and forces users to rely on entering very specific single keywords.
Proposed Solution:
Update the search functionality to apply all words entered in the query, rather than restricting results to the first word only. This way, users can perform both broad and specific searches as needed.
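As a sketch of the intended matching behaviour (the experiment names and helper below are illustrative, not an existing Optimizely API):

```python
def matches_all_words(name: str, query: str) -> bool:
    """True when every whitespace-separated query token appears in the name, case-insensitively."""
    name_lower = name.lower()
    return all(token in name_lower for token in query.lower().split())

experiments = ["Headers for PDP", "PDP hero image", "Checkout headers"]
print([e for e in experiments if matches_all_words(e, "Headers for PDP")])
# -> ['Headers for PDP'], instead of every experiment that merely contains "Headers"
```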
Reasoning / Business Impact:
For customers running numerous tests, efficiently searching and reviewing experiments…
3 votes -
Changing the State of a Feature Flag in Draft Status Should Notify User that Flag is Not Yet in Running State
If a flag status is Draft and a user attempts to change the value of that flag, the UI should notify the user that they are changing a flag that is not yet in Running state.
This will prevent cases where someone thinks they have changed a flag state, but as far as the system is concerned, they have not.
1 vote -
Allow changing project target URL without deleting changes
In order to support testing of more complicated experiments that require server-side code changes in tandem with experiment JS/CSS changes, please allow changing a project's target URL without deleting all the experiment changes.
RapidX already has the ability to do this (they have a button that can bypass the change deletion behavior), meaning the use case is valid and the code has already been written. I'm not sure if it's an oversight or if there is a justifiable design reason why customers cannot do the same.
Since we control both the experiment change selectors and the server…
3 votes -
Add Hyperlinks to Experiment Names in Opti ID MAU Dashboard
In the Opti ID > Usage & Billing > MAU Breakdown Dashboard, the “Experiment Name” column currently displays plain text for each experiment. While this provides identification, users must manually search for the experiment within the relevant Optimizely project to view or edit its details. This extra step slows down workflows, especially for customers or internal teams who need to investigate experiment configuration, performance, or attribution quickly.
Solution: Enhance the “Experiment Name” field in the Opti ID MAU dashboard to display each experiment name as a clickable hyperlink. The link should point directly to the experiment’s details page. The link…
1 vote -
Display bucketing ranges in the GUI (when changing traffic allocation)
I know that Optimizely tries to keep bucketing consistent when traffic allocation and/or distribution changes.
It would be great to be able to see the bucketing allocation (e.g. 1-5000 for A, 5001-10000 for B) in the interface.
In this example it is simple, but when ramping up (and possibly at the same time changing the distribution of traffic) it would be great to be able to verify via the interface which buckets are being set.
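For illustration, a rough sketch of how such ranges could be derived from the traffic allocation, assuming Optimizely-style bucketing over 10,000 buckets (the function and variation keys are illustrative, not the SDK's internals):

```python
MAX_BUCKET = 10000  # Optimizely-style bucketing uses 10,000 buckets

def bucket_ranges(allocations):
    """Convert (variation_key, percentage) pairs into cumulative bucket ranges.

    The hash of (bucketing_id, experiment_id) is assumed to be scaled into
    [0, MAX_BUCKET) and compared against these cumulative end values, which is
    the information this idea asks the UI to surface.
    """
    ranges, start = [], 0
    for key, pct in allocations:
        end = start + int(MAX_BUCKET * pct / 100)
        ranges.append((key, start + 1, end))  # 1-indexed, matching "1-5000 for A"
        start = end
    return ranges

print(bucket_ranges([("A", 50), ("B", 50)]))
# -> [('A', 1, 5000), ('B', 5001, 10000)]
```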
I am aware of Customer Profile Service but I see this as independent from that.
1 vote -
Disable losing variant(s) during experiment
As of now, in Feature Experimentation, if a variant performs (very) badly, there is no way to deactivate it or set its behavior back to baseline without needing to create a new rule.
This of course slows down experimentation speed.
I do understand that the results for that variant are not usable after its behavior is set back to baseline. However, that is not the issue. The goal is simply to be able to keep the test running while disabling a badly performing variant.
1 vote -
Folder/Organization System
I would like the ability to create folders within projects to organize our work. This would allow users to organize work by developer or by area of the site.
3 votes -
JIRA Integration for Feature Flags
Unfortunately, the JIRA integration is no longer available in the new Flags UI (as it has not yet been migrated). This idea post is a request to make it available again.
In our company we have a very close (1:1) relationship between JIRA tickets and experiment rules, which is why this integration was so helpful for relating code and work accordingly.
Many thanks in advance
Michael
2 votes -
Datafile Relay Proxy
If Optimizely's CDN goes down or is inaccessible, the SDKs don't have a default fallback mechanism to evaluate feature flags without access to the datafile hosted on the CDN.
It's possible to initialize the SDK with a cached datafile, but that requires custom logic. Ideally, Optimizely could provide a default fallback mechanism (e.g., a "relay proxy" service that caches the datafile, or a built-in fallback within the SDKs).
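A minimal sketch of the custom fallback logic this currently requires, assuming the public CDN datafile URL format and the Python SDK; the SDK key and cache path are placeholders:

```python
import requests
from optimizely import optimizely

SDK_KEY = "YOUR_SDK_KEY"                      # placeholder
CDN_URL = f"https://cdn.optimizely.com/datafiles/{SDK_KEY}.json"
CACHE_PATH = "/tmp/optimizely_datafile.json"  # illustrative cache location

def load_datafile() -> str:
    """Fetch the datafile from the CDN, falling back to the last cached copy."""
    try:
        resp = requests.get(CDN_URL, timeout=2)
        resp.raise_for_status()
        with open(CACHE_PATH, "w") as f:
            f.write(resp.text)                # refresh the local cache
        return resp.text
    except requests.RequestException:
        with open(CACHE_PATH) as f:           # CDN unreachable: use the stale copy
            return f.read()

client = optimizely.Optimizely(datafile=load_datafile())
```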
1 vote -
Filter metrics by linked 'event'
On the overview page of the metrics hub, I would like to filter by linked event so I can see all the existing metrics linked to a specific event. Currently it's only possible to filter by metric name, ID, or description.
2 votes -
Introduction of filters
It would be great to have some kind of filters that allow a deeper dive into the data, for example seeing how many people bought a licence AFTER triggering a specified conversion.
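A rough sketch of the kind of sequential filter described here, run over an exported event log with pandas (the column names and event keys are assumptions, not an Optimizely export format):

```python
import pandas as pd

# Illustrative event log: one row per event.
events = pd.DataFrame({
    "user_id":   ["u1", "u1", "u2", "u2", "u3"],
    "event":     ["signup_cta", "license_purchase", "license_purchase", "signup_cta", "signup_cta"],
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-02", "2024-01-05", "2024-01-04"]),
})

def purchased_after(events: pd.DataFrame, trigger: str, purchase: str) -> set:
    """Users whose first `purchase` event happened after their first `trigger` event."""
    first = events.groupby(["user_id", "event"])["timestamp"].min().unstack()
    mask = first[purchase] > first[trigger]
    return set(first.index[mask.fillna(False)])

print(purchased_after(events, "signup_cta", "license_purchase"))  # -> {'u1'}
```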
1 vote -
Triggering tests
It is sometimes hard to trigger a test variant when you are deep within a flow. Could a bookmark be created to easily trigger a test on any given page?
2 votes -
How long to get statistical significance
When the platform says another 50k users are needed to reach significance, could a date be added? E.g. it has taken 3 weeks to get to 25k, so give the date 3 weeks from now as the date when the 50k will be reached.
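A minimal sketch of the projection being asked for, assuming the traffic rate observed so far simply continues (the dates and counts are illustrative):

```python
from datetime import date, timedelta

def projected_significance_date(start, visitors_so_far, visitors_remaining, today=None):
    """Project when the remaining visitors will arrive, assuming the rate so far holds."""
    today = today or date.today()
    days_elapsed = max((today - start).days, 1)
    daily_rate = visitors_so_far / days_elapsed
    return today + timedelta(days=round(visitors_remaining / daily_rate))

# Example: 25k visitors in 3 weeks, 25k still needed -> roughly 3 more weeks.
print(projected_significance_date(date(2024, 1, 1), 25_000, 25_000, today=date(2024, 1, 22)))
```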
4 votes -
Remove unused experiments from user profile service
Currently, user profile service (UPS) maintains a map of user IDs to the experiment IDs they've previously been exposed to and the variation ID they received. The SDKs continuously append to the UPS without any cleanup, even if an experiment has been concluded and is no longer relevant.
In the past, customers have implemented diff logic to compare the live experiment IDs in the datafile against those in the UPS and remove experiment IDs from the UPS that are no longer in the datafile.
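A rough sketch of that diff logic, assuming the documented UPS lookup/save interface and the datafile's top-level experiments list (experiments inside groups are ignored here for brevity):

```python
import json

def stale_experiment_ids(datafile_json: str, user_profile: dict) -> list:
    """Experiment IDs stored in a UPS profile that no longer exist in the datafile."""
    datafile = json.loads(datafile_json)
    live_ids = {exp["id"] for exp in datafile.get("experiments", [])}
    stored_ids = user_profile.get("experiment_bucket_map", {}).keys()
    return [exp_id for exp_id in stored_ids if exp_id not in live_ids]

def prune_profile(user_profile_service, user_id: str, datafile_json: str) -> None:
    """Drop concluded experiments from one user's UPS entry, then save it back."""
    profile = user_profile_service.lookup(user_id)  # UPS interface: lookup/save
    if not profile:
        return
    for exp_id in stale_experiment_ids(datafile_json, profile):
        profile["experiment_bucket_map"].pop(exp_id, None)
    user_profile_service.save(profile)
```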
2 votes