
FIX - add alpha to SLOPE penalty #316


Merged: 2 commits merged into scikit-learn-contrib:main on Jun 13, 2025

Conversation

@jolars (Contributor) commented Jun 10, 2025

Context of the PR

See #315

Contributions of the PR

Add `alpha` to make `repr()` work on objects created with the `SLOPE` penalty. Closes #315.
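
A minimal sketch of the failure mode (hypothetical class, not skglm's actual code): scikit-learn-style `repr()` reads every `__init__` parameter back from an attribute of the same name, so a parameter that is consumed in `__init__` but never stored breaks the string representation.

```python
class SlopeSketch:
    """Hypothetical stand-in for the SLOPE penalty class."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # the fix: store alpha so repr() can find it

    def __repr__(self):
        # Mimics introspection-based repr: this getattr raises
        # AttributeError if self.alpha was never set in __init__.
        return f"{type(self).__name__}(alpha={getattr(self, 'alpha')!r})"


print(SlopeSketch(alpha=0.5))  # SlopeSketch(alpha=0.5)
```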

Checks before merging PR

  • added documentation for any new feature
  • added unit tests
  • edited the what's new (if applicable)

jolars added 2 commits June 10, 2025 10:55
Add `alpha` to make `repr()` work on objects created with the SLOPE penalty. Closes scikit-learn-contrib#315.
@Badr-MOUFAD (Collaborator) left a comment

LGTM, thx @jolars

@mathurinm (Collaborator)

Sorry for the delay. My only concern is that this introduces redundancy, but for a fixed decay ratio it may make parameter tuning easier (only one parameter, `alpha`, to tune).

@jolars, is it handled the same way in other packages?

@jolars (Contributor, Author) commented Jun 13, 2025

Actually, I don't really have an opinion on how to handle this. My only interest here is to fix the bug and keep the benchopt tests from erroring out 😄, so I'm happy with any solution. That said, I think it's worthwhile to have `alpha` be a parameter, since it is almost always this parameter, along with whatever other variables (such as `q`) you might use, that parameterizes the penalty sequence. I do think that using the two near-identical names `alpha` and `alphas` for essentially different things is a little confusing, but this way at least nothing breaks.
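
For context, here is a sketch of that parameterization, assuming the Benjamini-Hochberg-style sequence common in the SLOPE literature (function name and defaults are illustrative, not skglm's actual API): a global scale `alpha` times a decreasing sequence controlled by `q`.

```python
import numpy as np
from scipy.stats import norm


def slope_weights(n_features, alpha=1.0, q=0.1):
    # Full sequence `alphas` is alpha * lambda_1 >= ... >= alpha * lambda_p,
    # where lambda_k is the BH-style quantile sequence controlled by q.
    ranks = np.arange(1, n_features + 1)
    bh = norm.ppf(1 - q * ranks / (2 * n_features))
    return alpha * bh


print(slope_weights(5, alpha=0.5))
# ~ [1.16, 1.03, 0.94, 0.88, 0.82]: decreasing, scaled by alpha
```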

@jolars (Contributor, Author) commented Jun 13, 2025

I forgot to answer your question. Yes, at least the packages I have written always take both an argument for the scale of the penalty sequence and one for the sequence itself. It's redundant, but also practical.
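
A small sketch of that two-argument pattern (hypothetical names, not skglm's actual signature): an explicit sequence `alphas` wins when given; otherwise the scalar `alpha` scales a fixed default decay, so routine tuning only touches `alpha`.

```python
import numpy as np


class DualParamSketch:
    """Illustration of the redundant-but-practical scale + sequence API."""

    def __init__(self, alpha=1.0, alphas=None):
        self.alpha = alpha    # scalar scale, the one knob to tune
        self.alphas = alphas  # optional explicit sequence, overrides alpha

    def weights(self, n_features):
        if self.alphas is not None:
            return np.asarray(self.alphas)       # explicit sequence wins
        decay = np.linspace(1.0, 0.1, n_features)  # placeholder fixed decay
        return self.alpha * decay


pen = DualParamSketch(alpha=0.5)
print(pen.weights(4))  # [0.5  0.35 0.2  0.05]
```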

@mathurinm merged commit 1d03cdb into scikit-learn-contrib:main on Jun 13, 2025
4 checks passed
Development

Successfully merging this pull request may close these issues.

BUG - attribute `alpha` is not available for `penalty=SLOPE()` for string representation (#315)
3 participants