Death of the URL

25 Replies - 2662 Views - Last Post: 07 May 2014 - 06:51 AM

#1 jon.kiparsky

  • Pancakes!

Reputation: 7578
  • Posts: 12,746
  • Joined: 19-March 11

Death of the URL

Posted 01 May 2014 - 05:09 PM

So for a while now I've thought that the web needs to lose its focus on the URL. Not that we should get rid of the URL as a means of locating resources uniformly, but that it's a lousy way for users to locate pages, because it forces all businesses, organizations, etc., into a single vast namespace. Since people locate by URL, your real-world name has to correspond closely to your domain name - which is just kind of stupid in most cases.

If you take the URL out of the picture and ask people to find by link or by search, then all of a sudden life is better. There's less focus on finding an available domain and making the best of it, or making stupid puns based on the two-letter country codes, and more focus on providing a service that people want.

So it looks like Google is inching in that direction, at least: in a new build of Chrome, there's no provision for directly entering the URL in the search bar. It's only a half-step in the right direction, I guess, but it's a start. It'll be interesting to see how it pans out.


Links:

http://www.allenpike...urying-the-url/

http://garybacon.com...-chrome-canary/

(can I get the person responsible for "awesome bar" killed? That would be great, thanks!)


Replies To: Death of the URL

#2 no2pencil

  • Toubabo Koomi

Reputation: 5191
  • Posts: 26,901
  • Joined: 10-May 07

Re: Death of the URL

Posted 01 May 2014 - 05:17 PM

AOL Keyword : URL

#3 Ntwiles

  • D.I.C Addict

Reputation: 148
  • Posts: 825
  • Joined: 26-May 10

Re: Death of the URL

Posted 01 May 2014 - 07:22 PM

Interesting. I'd never thought about this issue. Apparently there are arguments going in both directions:

http://2013.jsconf.e...ng-the-web.html

Still trying to decide where I stand. The file system doesn't need to be transparent for desktop applications, so why should it be for web applications? As modern web applications evolve, sending links to dynamically created pages becomes more and more error-prone for users. The hierarchical layout of URLs can be a little restricting for some modern web applications.

That being said, the tools exist (especially URL rewriting) to circumvent these problems. Nowadays, a URL can mean whatever the hell you want it to mean, and look however you want it to look. Even in single-page applications, URLs can be used to modify parameters or start the application in a particular state, the same way command-line arguments do in desktop applications.

url.com/does-not/have_to/look12345/like%20this%20anymore.php.

Developers are just choosing not to make full use of the options they have.
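
To illustrate that point about URLs carrying application state - a rough Python sketch, with a made-up URL and made-up parameter names, not tied to any particular framework:

```python
from urllib.parse import urlparse, parse_qs

def url_to_state(url):
    """Treat a URL like a set of command-line arguments:
    the path picks the 'screen' and the query string sets options."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    options = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return {"screen": segments[0] if segments else "home",
            "args": segments[1:],
            "options": options}

# e.g. a single-page app restoring its state from the address bar
print(url_to_state("http://example.com/search/jackets?color=black&page=2"))
# {'screen': 'search', 'args': ['jackets'], 'options': {'color': 'black', 'page': '2'}}
```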

Edit: From the video:

Quote

If you think "I program using HTML, CSS, Javascript. That's what makes me a web developer", no, sorry, that's incorrect. The thing that makes you a web developer is if you build apps that have URLs. Because if you don't do that, it's literally not better than putting an .exe on an FTP site for people to download.


Very interesting point.

This post has been edited by Ntwiles: 01 May 2014 - 07:50 PM


#4 baavgai

  • Dreaming Coder

Reputation: 5780
  • Posts: 12,596
  • Joined: 16-October 07

Re: Death of the URL

Posted 02 May 2014 - 03:07 AM

Unique identifiers ARE ugly, almost as a function of their use case. That hardly matters: there are domains in which they are required, regardless of how aesthetically distasteful you find that identifier.

So, if I'm understanding this, browsers are dumbing down how they reference that identifier? Who cares what the UI is doing? At some level, that browser needs to talk HTTP and do all kinds of icky technical things. Just because your browser treats you like an idiot doesn't mean it's going to stop using the protocols it needs to perform its function.

Why do developers make "helpful" design decisions that fix no problems and confuse users who are finally comfortable with their product? It happens all the time. Last night I spent half an hour putting back a little orange navigation button that used to be in Firefox, because the latest release just dropped it. Developers often enjoy change in their programs; few users do.

The URL is in no danger. The long suffering user's ability to easily get at it might be.

#5 depricated

  • DLN-000

Reputation: 591
  • Posts: 2,118
  • Joined: 13-September 08

Re: Death of the URL

Posted 02 May 2014 - 05:58 AM

I'm wondering how much of this is to do with actual URLs and how much is misplaced focus on Domains.

This change to Chrome does hide the URL, but I think you'd need to present a pretty robust alternative before I was ready to walk away from using a unique string to identify a network path.

URLs aren't only the domain names that we associate with popular websites; they're the full string, which can include accessible data. Dumping URLs as developers would require rewriting whole scads of software, and would neuter PHP and ASP. I'm not opposed to change and progress, but honestly I can't think of a more human-friendly method of conveying an address than domain-masked URLs.

Being able to leave the TLD off is cool and all, I guess, when we're talking about typing Amazon in as your URL and it taking you to the Amazon homepage. But what about when you type in WhiteHouse? Is that still a porn site? I can't check right now. Which TLD does it take you to? COM? GOV? NET? Domain Names actually AREN'T part of the URL at all. They're masks on Domain Name Servers that translate IP Addresses into associated, more human-friendly terms. Knowing what I'm doing I can just go to 74.125.225.41 instead of entering in the domain and tld for google.com. On more than one occasion I've identified the source of network "outages" based on my ability to navigate by non-masked URL. When Cincinnati Bell's DNS server went down I identified it before they did and switched over to using Google's public DNS on 8.8.8.8.
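
For what it's worth, a quick Python sketch of that lookup (google.com is just an example; the address printed is whatever your resolver happens to return):

```python
import socket

# Ask the configured DNS resolver for the IP behind a domain name - the same
# translation the browser does silently before it ever speaks HTTP.
ip = socket.gethostbyname("google.com")
print(ip)
# Often you can paste that IP straight into the address bar and reach the
# same server with no DNS involved at all - which is how you can spot a
# resolver outage: the IP works while the name doesn't.
```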

I suppose part of my concern is that the hierarchy is important. The nesting is important. It's so ingrained in me I couldn't think of another way to access C:\Program Files\ except by URL or GUI navigation - and I'm much faster than my GUI.

This post has been edited by depricated: 02 May 2014 - 06:06 AM


#6 jon.kiparsky

  • Pancakes!

Reputation: 7578
  • Posts: 12,746
  • Joined: 19-March 11

Re: Death of the URL

Posted 02 May 2014 - 06:45 PM

View Postbaavgai, on 02 May 2014 - 05:07 AM, said:

Unique identifiers ARE ugly, almost as a function of their use case. That hardly matters: there are domains in which they are required, regardless of how aesthetically distasteful you find that identifier


Let's be a little clear about what's going on here. Obviously, the URL is in no danger. It seems almost unbearably obvious that a web requires that resources be uniquely located - I think I said as much in the original post. The question is about the use of the URL by end users, and how best to get people to the content that they want.

The change is about "killing" end user awareness of the URL - which I would argue is long overdue.

Quote

So, if I'm understanding this, browsers are dumbing down how they reference that identifier?



No, I don't think that's it. I would say it's the opposite - the web is smartening up how it references that handle, and ideally, pushing that handle out of view.
Really - is there any particular reason why the URL should be the thing that people have to know to get at the content? If there's a way to get the user to the content they want more directly, I would think that would be a good thing. And I see the Chrome step as inching a little towards that - good for them.

#7 jon.kiparsky

  • Pancakes!

Reputation: 7578
  • Posts: 12,746
  • Joined: 19-March 11

Re: Death of the URL

Posted 02 May 2014 - 07:09 PM

View Postdepricated, on 02 May 2014 - 07:58 AM, said:

I'm wondering how much of this is to do with actual URLs and how much is misplaced focus on Domains.


I think you're on the right track here, but of course if a user ever thinks about a domain, they're thinking about a URL (since the domain name, with TLD, is a URL these days) and when they think about a URL they're mostly thinking about a domain name. And of course the question is, why should the end user ever think about a domain name at all - ever?

Those of us in the early stages of crusty old farthood - that is, people who were there when the web was a cute alternative to FTP - remember how hard it was to teach ordinary mortals how to use it. "No, grandma, you can't just type 'burger king', you have to type 'http://burgerking.com' - oh wait, no, it's 'http://bk.com'."
I think grandma was right - that's the wrong way to do it. And when browsers started allowing search in the navigation bar, people started using it, because URLs are dumb.
So now, I think, it's time to take the next step and push the URL back into the "developer tools" part of things, with the other bits and bobs that don't really matter to most of the people using the thing.

Quote

This change to Chrome does hide the URL, but I think you'd need to present a pretty robust alternative before I was ready to walk away from using a unique string to identify a network path.
URLs aren't only the domain names that we associate with popular websites, they're the full string that can include accessible data. Dumping URLs as developers would require rewriting whole skads of software, and would neuter PHP and ASP. I'm not opposed to change and progress, but honestly I can't think of a more human friendly method of conveying an address than domain masked URLs.


Again, I thought it was pretty clear, but of course you're right: no such alternative is coming, and developers are not going to dump URLs. The URL works pretty well for machines; for people, less so. The underlying code stays the same and works the same, because that works fine. What changes is the way people interact with that code, which is pretty badly made, and is really just a byproduct of the lack of decent search in the early years of the web.

Quote

Domain Names actually AREN'T part of the URL at all. They're masks on Domain Name Servers that translate IP Addresses into associated, more human-friendly terms.


I think this is incorrect - certainly it's not common usage, but I think it's also incorrect in the nitpicky sense as well. I also note that wikipedia agrees with me on this. So there. :)
I'm also not sure that it's all that significant to the issue at hand. (see above)

Quote

I suppose part of my concern is that the hierarchy is important. The nesting is important. It's so ingrained in me I couldn't think of another way to access C:\Program Files\ except by URL or GUI navigation - and I'm much faster than my GUI.


Not sure what this has to do with changing the way we navigate the web?

This post has been edited by jon.kiparsky: 02 May 2014 - 07:10 PM


#8 Ntwiles

  • D.I.C Addict

Reputation: 148
  • Posts: 825
  • Joined: 26-May 10

Re: Death of the URL

Posted 02 May 2014 - 07:54 PM

So I'm all for dropping the use of URLs for getting around a domain, or for accessing a specific page of a domain, but what about getting to the domain in the first place? How does that work? We're really supposed to drop any kind of unique identifier and go to strictly searching for the content we want? Do we really want to put that much faith into modern search algorithms? I'm not sure I do.

View Postdepricated, on 02 May 2014 - 07:58 AM, said:

I suppose part of my concern is that the hierarchy is important. The nesting is important. It's so ingrained in me I couldn't think of another way to access C:\Program Files\ except by URL or GUI navigation


First, the fact that it's been ingrained in your mind doesn't necessarily make it the best system. Secondly, you DO access files in a way other than by URL every time you run a desktop application. The locations of the files being accessed at any given time just aren't made transparent to you, because that's information useful to the developer, but not to the end user. This is the argument being made against the URL.

View Postdepricated, on 02 May 2014 - 07:58 AM, said:

...and I'm much faster than my GUI.


Not a chance. Not GUI done right.

This post has been edited by Ntwiles: 02 May 2014 - 07:56 PM


#9 baavgai

  • Dreaming Coder

Reputation: 5780
  • Posts: 12,596
  • Joined: 16-October 07

Re: Death of the URL

Posted 03 May 2014 - 07:59 AM

View Postjon.kiparsky, on 02 May 2014 - 08:45 PM, said:

If there's a way to get the user to the content they want more directly, I would think that would be a good thing.


I agree completely.

I'm just not sure if "do this in the browser, then this, then that. Oh, wait, you don't have browser X?" is more direct than "Here's a long, apparently meaningless string of characters that will get you directly to the page."

Or, as in the first page example, "go here, search for this, pick the second one." That's probably my biggest problem. I still need a way to directly identify a page in question.

Currently, most users can ignore the address bar without fear once they get to the site they want. Once at the site, they use the tools on the site to get where they want. Hiding the mess in the address bar won't change this behavior, though it will make sites that aren't on board with a particular browser's URL-hiding mechanism more confusing for the users who take advantage of it.

#10 depricated

  • DLN-000

Reputation: 591
  • Posts: 2,118
  • Joined: 13-September 08

Re: Death of the URL

Posted 03 May 2014 - 09:01 AM

View Postjon.kiparsky, on 02 May 2014 - 08:09 PM, said:

I think this is incorrect - certainly it's not common usage, but I think it's also incorrect in the nitpicky sense as well. I also note that wikipedia agrees with me on this. So there. :)
I'm also not sure that it's all that significant to the issue at hand. (see above)


I'm not sure I explained what I mean properly then. It seems, to me, more focused on domain names than on the actual URL (i.e. the difference between dreamincode.net and dreamincode.net/forums/index.php) - but maybe I'm misunderstanding. Reading later posts I think I am, and it's not against the domain name so much as the way the server is accessed once there?

got a laugh at the wikipedia agrees with me bit

but what I mean about the domain name not being part of the URL is that it's not necessary to access it. I could use http://216.68.248.249 for the exact same result as http://google.com - because all typing google.com does is send the domain name and TLD to the DNS I use (8.8.8.8, ironically for this example) and get back the registered IP address for that domain. In the background, it translates the domain name via the DNS in order to access the server. But you're right, that's probably nitpicky, and not what I intended. I thought it was relevant.

Quote

Not sure what this has to do with changing the way we navigate the web?

I'm just comparing local navigation with network navigation, which is extremely similar in the interface. Any sort of navigation that works on a networked file structure should also work on a local file structure.

Ntwiles said:

First, the fact that it's been ingrained in your mind doesn't necessarily make it the best system.

Sorry, I'm not saying it makes it the best system; I'm saying it makes it difficult for me to think past that system. I'm pointing out a personal failing that gets in the way, not a justification for the current system.

Quote

Secondly, you DO access files in a way other than by URL, every time you run a desktop application. The location of the files which are being accessed at any given time just aren't made transparent to you.

You mean the file index? Which is still dependent on unique string identifiers that are little different from URLs for any sort of high end access, be it a LoadFile function in C# that needs the path to the file as a string, or a shortcut on the desktop that stores the location in a URL-like string.

Quote

Not a chance. Not GUI done right.

I'm much faster than Windows, Firefox, and Opera; probably IE and Safari too, but I don't use those. My roommate and I have a joke about the "click tax": when you click on a textbox before the page finishes loading and type in what you need, only to have the form dump when the page finishes loading. I might be conflating slow page loads with performance, though, so it's a bad comparison.

#11 jon.kiparsky

  • Pancakes!

Reputation: 7578
  • Posts: 12,746
  • Joined: 19-March 11

Re: Death of the URL

Posted 03 May 2014 - 10:19 AM

View Postbaavgai, on 03 May 2014 - 09:59 AM, said:

I'm just not sure if, "do this in the browser, then this, then that. Oh, wait, you don't have browser X?" is more direct than "Here's a long, apparently meaningless string of characters that will get you directly to the page."


I was thinking more about individual navigation - "I want to find out whether Raven Books has a copy of Piketty's 'Capital in the Twenty-First Century'", not "here's how to get to the page for Piketty's book on Raven Books' site". And for this, the URL is not the way to go. Search in the nav bar seems to be proving that this is a popular opinion, if not a unanimous one, and frankly I'd still like to see a return to the good old links page, which has mostly faded from view and is much to be lamented. (Yes, this requires URLs: again, I don't mind thinking about a URL at design time, if necessary, but if I use a CMS that thinks about them for me, and does it well, then I'm happier.)

But thinking about sharing content on the fly - yes, I agree that this is a UI problem to be solved. Passing links around is an important user demand, and one worth satisfying. And there are some basic problems with simply "copy the gibberish and paste it into an email / text it / write it down on a napkin / whatever". In the Old Web, this worked great, because a URL was actually a path to a file on a file system. That's no longer likely to be true, and the relation of the contents of the browser bar to the contents of the screen is less obvious. Hence the proliferation of "share" buttons and permalinks. So now we have sites offering to wrap up the 'path to here' information in a nice bundle that works great for your browser. And that 'path to here' information is ultimately a URL - but again, I don't think the URL is what the user cares about. When I think about the assertion that "baavgai thinks I should look at the page at [some URL here] because I talked about a China Mieville book", the things I care about are "baavgai thinks I should look at this" and "because China Mieville" - the URL is noise.
I don't think there's an obvious answer, but the desiderata are clear: we'd like the user to be able to pass a reference to something interesting to some other party without having to think about the low-level facts of "how does the web find this?" - sort of like, as programmers, we'd like to think about high-level data structures and not about pointers and memory management. We want a high-level interface on the web.

And yes, I realize that this starts to sound like a manifesto. I don't mean it to be, really.

#12 jon.kiparsky

  • Pancakes!

Reputation: 7578
  • Posts: 12,746
  • Joined: 19-March 11

Re: Death of the URL

Posted 03 May 2014 - 10:32 AM

View PostNtwiles, on 02 May 2014 - 09:54 PM, said:

So I'm all for dropping the use of URLs for getting around a domain, or for accessing a specific page of a domain, but what about getting to the domain in the first place? How does that work? We're really supposed to drop any kind of unique identifier and go to strictly searching for the content we want? Do we really want to put that much faith into modern search algorithms? I'm not sure I do.


To answer this question, you can try an experiment. Using any modern browser that incorporates search in the nav bar, try spending a week navigating only by search. See how that works for you.
I found that when I did this I started thinking more about content and less about sites - and getting better results. Sometimes I care about sites - for example, if I'm trying to figure out how to use some particular javascript function, I would really rather not have w3schools come up as a result - but generally I'm not as good at finding stuff as a search engine is.
So now I find that there are some sites that I want to go to particularly, because I monitor their content. Those are sites that I have bookmarked (xkcd, thesession.org, dic, a few others). Sometimes I want to shop at a particular merchant for a product - for example, if I can get a title at Powell's I'd rather do that. So I include "powell's books" in the search for that title, and I'm good. This is actually faster for me than going to "powells.com" and using their search function, it turns out.
So I don't think this is such a difficult problem.

Quote

View Postdepricated, on 02 May 2014 - 07:58 AM, said:

...and I'm much faster than my GUI.


Not a chance. Not GUI done right.


Actually, I have to side with dep on this. I don't know of any serious users who are familiar with both CLI and GUI who find that the GUI is faster than directly navigating their file system. I haven't used Windows enough to say anything about the current state of the shell there, but on a compliant OS I believe it's always going to be faster to do filesystem tasks from the command line, provided you are familiar with the shell.
The benefit of the GUI is that it allows you to preserve your ignorance and still get stuff done, sorta. It's not that it provides any particular task efficiency.


Quote

I'm just comparing local navigation with network navigation, which is extremely similar in the interface. Any sort of navigation that works on a networked file structure should also work on a local file structure.


This is an artifact of the Old Web, where you were actually just getting a front end on FTP and some standard markup on the returned results. Year on year, I think you're seeing, and will continue to see, less relation between file structures and navigation. I'm thinking of course about django as my main example, where the part of the URL following the domain is really just a DSL providing instructions to the web application. It bears no relation at all to files on the file system, and if you do it well it also reveals nothing about the database model or the keys used by the database. So the request is completely decoupled from the representation, which is great.
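
Roughly, a sketch of what that looks like in django (using the current path() routing style rather than the regex style of the time; the view, model, and template names here are made up for illustration):

```python
# urls.py - the URL below is a little routing DSL; it says nothing about files on disk
from django.urls import path

from . import views

urlpatterns = [
    # /books/some-slug/ -> views.book_detail(request, slug="some-slug")
    path("books/<slug:slug>/", views.book_detail, name="book-detail"),
]

# views.py - the slug is resolved however the app likes; the URL never exposes
# database keys, table layout, or any path on the server's file system
from django.shortcuts import get_object_or_404, render

from .models import Book  # hypothetical model


def book_detail(request, slug):
    book = get_object_or_404(Book, slug=slug)
    return render(request, "books/detail.html", {"book": book})
```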

This post has been edited by jon.kiparsky: 03 May 2014 - 10:37 AM


#13 Ntwiles

  • D.I.C Addict

Reputation: 148
  • Posts: 825
  • Joined: 26-May 10

Re: Death of the URL

Posted 03 May 2014 - 01:12 PM

View Postdepricated, on 03 May 2014 - 11:01 AM, said:

You mean the file index? Which is still dependent on unique string identifiers that are little different from URLs for any sort of high end access, be it a LoadFile function in C# that needs the path to the file as a string, or a shortcut on the desktop that stores the location in a URL-like string.


File paths and URLs look so similar because they are the same thing. Unless you're using URL rewriting, a URL is just a path to a data file or a script. Nobody is trying to totally redesign how file systems work; that would stay the same. The URL of each file would still be used by the web developer in the backend, it just wouldn't be transparent to the end user anymore.

View Postdepricated, on 03 May 2014 - 11:01 AM, said:

I'm much faster than Windows, Firefox, and Opera. Probably IE and Safari but I don't use those. My roommate and I have a joke about the "click tax" - when you click on a textbox before the page finishes loading and type in what you need to, only to have the form dump when the page finishes loading. I might be conflating slow page loads with performance though, so it's a bad comparison.


View Postjon.kiparsky, on 03 May 2014 - 12:32 PM, said:

Actually, I have to side with dep on this. I don't know of any serious users who are familiar with both CLI and GUI who find that the GUI is faster than directly navigating their file system. I haven't used Windows enough to say anything about the current state of the shell there, but on a compliant OS I believe it's always going to be faster to do filesystem tasks from the command line, provided you are familiar with the shell.
The benefit of the GUI is that it allows you to preserve your ignorance and still get stuff done, sorta. It's not that it provides any particular task efficiency.


Edit: Sorry, forgot to edit in my reply. I guess that was a little hasty of me to say. More convenient != faster, you guys are right.

View Postjon.kiparsky, on 03 May 2014 - 12:32 PM, said:

To answer this question, you can try an experiment. Using any modern browser that incorporates search in the nav bar, try spending a week navigating only by search. See how that works for you.
I found that when I did this I started thinking more about content and less about sites - and getting better results. Sometimes I care about sites - for example, if I'm trying to figure out how to use some particular javascript function, I would really rather not have w3schools come up as a result - but generally I'm not as good at finding stuff as a search engine is.
So now I find that there are some sites that I want to go to particularly, because I monitor their content. Those are sites that I have bookmarked (xkcd, thesession.org, dic, a few others). Sometimes I want to shop at a particular merchant for a product - for example, if I can get a title at Powell's I'd rather do that. So I include "powell's books" in the search for that title, and I I'm good. This is actually faster for me than going to "powells.com" and using their search function, it turns out.
So I don't think this is such a difficult problem.


That's all well and good, but I just don't see that working in every case. What about the validity that comes from owning a particular domain? What if I found a jacket on a nice online store, but a malicious developer decided to copy the content of that site line by line, with a few nasty hidden surprises? How does the end user identify the correct (and safe) website, based on a search, with no domain to prove "Hey, this is the REAL jackets.com"?

Do I really have to bookmark EVERY site I want to return to later? What if three months later I decide I want to go back and buy another jacket from that same site? Do I just try to search for the same jacket and find the same site again through that route? Good luck, it's not going to work.

You could say "Well, the site doesn't have a domain, but it still has a name. You can search for that." Sure, in some cases. But we aren't going to have a domain system forcing uniqueness in names anymore. How many sites named "Men's Fashion HQ" or "Jacket Emporium" are going to start popping up? Corny titles, but you get the idea.

Here's another one. What if I want a jacket from Buckle, but I want to get it from their official online store? I can't just search "Buckle jackets". I'll get results from 50 other online stores selling Buckle jackets, with the official Buckle store mixed in there somewhere. Should I search "official buckle store" and hope for the best? Not the kind of search I want to put my faith in.

And those are all just problems with jackets. Think of how bad the sweater vests will be.

This post has been edited by Ntwiles: 03 May 2014 - 01:15 PM


#14 depricated

  • DLN-000

Reputation: 591
  • Posts: 2,118
  • Joined: 13-September 08

Re: Death of the URL

Posted 03 May 2014 - 11:07 PM

View Postjon.kiparsky, on 03 May 2014 - 11:32 AM, said:

you can try an experiment. Using any modern browser that incorporates search in the nav bar, try spending a week navigating only by search. See how that works for you.

This sounds really interesting; I'm going to try it and see how it works. For sites like DIC, though, should I not use bookmarks?

View Postjon.kiparsky, on 03 May 2014 - 11:32 AM, said:

Actually, I have to side with dep on this. I don't know of any serious users who are familiar with both CLI and GUI who find that the GUI is faster than directly navigating their file system. I haven't used Windows enough to say anything about the current state of the shell there, but on a compliant OS I believe it's always going to be faster to do filesystem tasks from the command line, provided you are familiar with the shell.
The benefit of the GUI is that it allows you to preserve your ignorance and still get stuff done, sorta. It's not that it provides any particular task efficiency.

Exactly. The example that came to mind was the lag between how fast I can click to a field and type something, and when the GUI logic kicks in, wipes the field, and steals the cursor (I abhor that behavior, as an aside - it's the worst functionality you could possibly write into a program, and it has to be intentional, which makes it worse). But yes, navigating my file system by typing in where I want to go is really what I meant. If I'm using Windows Explorer I'll just enter H:\downloads for my download folder, for example, rather than click on H, wait for it to load, scroll through to D and select Downloads.

Quote

This is an artifact of the Old Web, where you were actually just getting a front-end on FTP and some standard markup on the returned results. Year on year, I think you're seeing and will continue to see less relation between file structures and navigation. I'm thinking of course about django as my main example, where the part of the url following the domain is really just a DSL providing instructions to the web application. It bears no relation at all to files on the file system, and if you do it well it also reveals nothing about the database model or the keys used by the database. So the request is completely decoupled from the representation, which is great.

I know exactly what you mean. I use ASP.NET at work and I've been learning how it works, and that's exactly it. The URL is data used by the application rather than a path to a file on a server. That said, this is actually exactly what I meant too. It IS part of the navigation, insofar as it tells the application what output to return, and since it works on the network it works locally as well. If I navigate to localhost:(port) it will load the application, and the URL will function around localhost just as it would around any other domain. That's what I mean by "if it works on the network it should work locally" - not that it should be the only system working locally, but that the functionality shouldn't be network-dependent, if that makes sense?


View PostNtwiles, on 03 May 2014 - 02:12 PM, said:

File paths and URLs look so similar because they are the same thing. Unless you're using url rewriting functions, an URL is just a path to a data file or a script. Nobody is trying to totally redesign how file systems work, that would still be the same. The URL of each file would still be used by the web developer in the backend, it just wouldn't be transparent to the end user anymore.

Which is exactly what I was saying. The only appreciable difference is on the back end, and the only practical difference is that URL refers to a very specific type of string identifier. A file path is not a URL, but a URL can be a file path. Think of it like inheritance: Class URL Extends Pathname (though, given the above, this is becoming less and less true).

View PostNtwiles, on 03 May 2014 - 02:12 PM, said:

snip(last section)

I agree that simply searching for content isn't necessarily the best way. Utilizing a botnet (not something at my disposal, but someone who might do this may have one), it wouldn't be overly difficult to weight Google search results (since they're popularity-based - thus "SEO" [blargh]) to point to a spoof website instead of the real thing. That would make it very easy to get people to load your malicious ActiveX control and take over from there.

#15 Lieoften

  • D.I.C Head

Reputation: 17
  • Posts: 244
  • Joined: 06-January 10

Re: Death of the URL

Posted 04 May 2014 - 01:20 PM

I don't think URLs should be killed off; I can understand altering the way they work, but killing them off completely is just a stupid idea. That most likely goes back to the fact that everyone wants everything done stupidly today (people in this forum and other forums for that matter; I'm referring to the facebook/reddit masses). I may be coming from an Old School Internet approach, but what Google is trying to do with Canary is essentially what AOL did with their Keyword searches.

I don't want my internet to be funneled to me - I don't want to have to go through google or facebook or reddit or what have you in order to find my internet fix for the day. Those things are powered by the masses, and I have vastly different tastes than the swarms of 14-17-year-old teenyboppers who cling to /r/funny all day, telling that cat to hang on. I mean, FFS, right now I'm listening to the audiobook of A Brief History of Time, with another book on wormholes queued up behind it... This is my pleasure listening.

That's my view on the subject from a personal level. From a technical level, I can see where they're coming from, but it's still not a good idea. I'm not 100% sure what Google is trying to do with the Canary browser and such, but I get the feeling they are trying to phase out domains. I like domains, man.

This post has been edited by Lieoften: 04 May 2014 - 01:23 PM

