Feb. 12, 2017, 12:08 p.m.
I was having a hard time with the Laravel auth package. With the out-of-the-box Laravel Auth, if you try to access a page you don't have access to, Auth redirects you to the login page and then, after a successful login, redirects you back to the page you were trying to access.
This works fine. But if I went to a page and then clicked on login, it would redirect me to the page specified in the Auth controller instead of to the page I was on before I clicked login. I searched for a while and found some info, but not much addressing this specific issue.
I finally found this thread on Laracasts, which gives a simple solution to the problem.
The solution is to override the showLoginForm() method in LoginController.php (under the app directory). I added this function:
public function showLoginForm()
{
    // Only set url.intended if the auth middleware hasn't already done so
    if (!session()->has('url.intended')) {
        session()->put('url.intended', url()->previous());
    }

    return view('auth.login');
}
This pushes the previous page onto the session as url.intended, which is the same thing the middleware does. But this does it in all cases, not just when the middleware catches an auth error. After login the Auth controllers now send you back to url.intended instead of to the default page specified in $redirectTo.
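For reference, here's roughly how the login trait consumes that value - a paraphrased sketch of Laravel's AuthenticatesUsers trait (the real trait does a bit more):

// Inside Illuminate\Foundation\Auth\AuthenticatesUsers (paraphrased)
protected function sendLoginResponse(Request $request)
{
    $request->session()->regenerate();

    // intended() pulls url.intended out of the session,
    // falling back to $redirectTo when it isn't set
    return redirect()->intended($this->redirectPath());
}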
Feb. 12, 2017, 12:08 p.m.
I just updated the search of my records here so it loads the results using Ajax instead of refreshing the whole page. Everything seemed to work fine, but then I noticed that it broke the Laravel pagination. I had included the pagination in the section of the page reloaded by searches and sorts so it would update appropriately, but then clicking a page link loaded the raw results by themselves instead of loading them into the div on the page where they were supposed to go.
This was a bit tricky to solve. What I ended up doing was exporting the pagination views to my resources directory and editing them there. To each of the pagination links I added two things: a class of page-link and a data-val attribute containing the number of the page the link points to.
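In Laravel 5.4 the views can be published with php artisan vendor:publish --tag=laravel-pagination. A sketch of what one of the edited links might look like (the exact markup depends on the published view you're editing; $page and $url come from the view's loop over the page elements):

{{-- resources/views/vendor/pagination/default.blade.php (sketch) --}}
<li><a class="page-link" href="{{ $url }}" data-val="{{ $page }}">{{ $page }}</a></li>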
Then I added a script to the page that triggers when you click an element with the page-link class; it gets the page to be loaded from the data-val attribute and submits that to the script that loads the appropriate page with the appropriate variables. Then I did the same thing for the sort links - instead of having each trigger its own script, I made one script triggered by a click on the class and put the data in the data-val attribute.
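A minimal sketch of that delegated handler, assuming jQuery; the endpoint and results div are made-up names:

<script>
// One delegated handler covers links re-rendered by Ajax too
$(document).on('click', '.page-link', function (e) {
    e.preventDefault();
    var page = $(this).data('val'); // page number stored on the link

    // '/records/search' and '#results' stand in for the real
    // search endpoint and results container
    $.get('/records/search', { page: page }, function (html) {
        $('#results').html(html);
    });
});
</script>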
Feb. 12, 2017, 12:06 p.m.
I decided I wanted to back up my database somewhere other than my server. All of my code is in git, so the only thing that could be lost in case of server errors is the database. To start, I wrote a little shell script to dump the database using mysqldump. I wasn't sure where to put the SQL file to keep it off-server. My first thought was to put it in git, which was easy to do in the script. So I updated the script to add the file to git, commit the changes, and push the repo up.
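Such a script might look like this - a sketch, with the database name, paths, and credentials as placeholders:

#!/bin/sh
# Dump the database and commit the dump to git (all names are placeholders)
mysqldump -u backup_user -p"$DB_PASSWORD" my_database > backups/db.sql

cd /path/to/repo
git add backups/db.sql
git commit -m "Database backup $(date +%Y-%m-%d)"
git push origin master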
After a bit more thought I decided that might not be the best way to do it. It worked fine, but my usual workflow is to make all changes locally, push them to git, and then pull to production - I don't make any changes on the production server unless absolutely necessary. While adding files to git that don't exist in my dev environment shouldn't really cause any problems, I thought there must be a better way.
So I decided to put the dump file into an Amazon S3 bucket. Laravel can use S3 as a filesystem, as documented here, but I had tried to use this before and not had much luck. I saw that Amazon has a PHP package for interacting with S3, the AWS SDK for PHP, so I thought I would try that out. After a little more digging I found that Amazon also has a package specifically for Laravel, which is located here. That turned out to be the winner. Instead of reading pages of documentation for Laravel's filesystem or the Amazon SDK, all I needed was a few lines of code and I was up and running. As a note, this package keeps the Amazon keys and secrets in the .env file, which is a lot better than keeping them in the filesystems.php config file like Laravel does. If you are going to use Laravel's S3 filesystem, I suggest you update filesystems.php to pull them from the .env.
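That last suggestion just means using env() in config/filesystems.php - something like this, with the conventional AWS variable names:

// config/filesystems.php
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_REGION'),
    'bucket' => env('AWS_BUCKET'),
],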
Now that I was able to upload files to S3 from a browser, the next step was to create an artisan command that I could call from my shell script. The Laravel documentation for this was clear and easy to follow. The only problem I had was a typo that for some reason didn't throw an error locally, but did on my production server. Other than that, this is tested and working.
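A sketch of what such a command might look like using the AWS facade from the Laravel package - the command name, bucket, and file paths are all made up:

<?php
// app/Console/Commands/BackupDatabase.php (sketch)
// Remember to register the command in app/Console/Kernel.php

namespace App\Console\Commands;

use Aws\Laravel\AwsFacade as AWS;
use Illuminate\Console\Command;

class BackupDatabase extends Command
{
    protected $signature = 'db:backup';
    protected $description = 'Upload the latest database dump to S3';

    public function handle()
    {
        $s3 = AWS::createClient('s3');

        $s3->putObject([
            'Bucket'     => 'my-backup-bucket',
            'Key'        => 'backups/db-' . date('Y-m-d') . '.sql',
            'SourceFile' => storage_path('backups/db.sql'),
        ]);

        $this->info('Dump uploaded to S3.');
    }
}

The shell script then just runs php artisan db:backup after the mysqldump.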
I had considered using S3 for this site in the past, but decided not to since I had problems with the Laravel S3 filesystem. Now that I've integrated with S3 so easily I may revisit that decision.
Labels: coding
Feb. 12, 2017, 12:06 p.m.
I initially ran into problems pulling the data from discogs because I was using a package that provided a Laravel wrapper for cURL, called Laracurl, and it didn't support a header that discogs needed. So I made a change to the package and got it working. After someone advised me that Guzzle was a better package than raw cURL, I switched to that, and in the process rewrote my matching code. After running the new, improved, streamlined code I have now matched all but 250 of my records to the discogs data, and 50 of those unmatched records are white labels, which may not be matchable. So I am pretty happy with that ratio and will be moving on to my next project now.
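For anyone hitting the same wall: the Discogs API rejects requests without an identifying User-Agent header, which I'd guess was the header in question. With Guzzle, setting it looks roughly like this (the app name, token, and search terms are placeholders):

<?php
// Sketch of a Discogs search with Guzzle 6
require 'vendor/autoload.php';

$client = new GuzzleHttp\Client([
    'base_uri' => 'https://api.discogs.com/',
    'headers'  => [
        // Discogs requires a User-Agent identifying your app
        'User-Agent'    => 'MyRecordApp/1.0',
        'Authorization' => 'Discogs token=your-token-here',
    ],
]);

$response = $client->get('database/search', [
    'query' => ['artist' => 'Some Artist', 'type' => 'release'],
]);

$results = json_decode((string) $response->getBody(), true);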
Almost all of my records now have a link to discogs and a thumbnail pulled from discogs on the record detail page. If anyone wants to see additional info on the records, such as the tracklisting, year of release, or anything else, that info will be available on discogs. I didn't see any need to duplicate that data locally.
The code I wrote to automatically match my records to discogs did not match everything 100% accurately. I tried to review the matches to make sure they were accurate, but I may have missed a couple here and there.
Thanks to Discogs.com for making a great API.